Re: [zfs-discuss] [osol-discuss] Moving Storage to opensolaris+zfs. What about backup?

2010-03-10 Thread Günther
hello

What I'm thinking about is:
keep it simple

1.
I'm really happy to throw away all sorts of tapes. When you need them,
they are not working, are too slow, or their capacity is too small.

Use HDDs instead. They are much faster, bigger, and cheaper, and your data
is much safer on them. For example, an external 2 TB HDD (USB, or better,
eSATA) is about 100 euro. Buy three of them, copy/sync your files to them,
export the drive, and keep it at another location. Use zfs send, rsync, or
do it from Windows with robocopy to keep the files in sync.
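As a sketch, the three-drive rotation could look like the script below. The pool names "tank" and "backup1" are invented for illustration, and the script only prints the commands (dry run), so nothing is touched:

```shell
#!/bin/sh
# Rotating external-disk backup (sketch). "tank" stands in for the
# source pool; "backup1" is a pool living on whichever of the three
# external drives is currently plugged in.
BACKUP_POOL=${1:-backup1}
SNAP="tank@sync-$(date +%Y%m%d)"

# Dry run: remove the echos to execute for real.
echo zpool import "$BACKUP_POOL"              # attach the drive's pool
echo zfs snapshot -r "$SNAP"                  # point-in-time source
echo "zfs send -R $SNAP | zfs receive -dF $BACKUP_POOL"
echo zpool export "$BACKUP_POOL"              # now safe to unplug and take offsite
```

When the external drive is not formatted as ZFS, rsync (or robocopy from Windows) would replace the send/receive pair.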

2.
Move your ESXi storage from local disks to your ZFS storage to get the ZFS
snapshot and dedup features. I would prefer NFS over iSCSI, to have parallel
access via CIFS. Use a second storage box (or, not recommended because it is
slow, local ESXi space) for redundancy of your vhd files.


gea

napp-it.org / zfs server
-- 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] [osol-discuss] Moving Storage to opensolaris+zfs. What about backup?

2010-03-09 Thread rwalists
On Mar 8, 2010, at 7:55 AM, Erik Trimble wrote:

 Assume your machine has died the True Death, and you are starting with new 
 disks (and, at least a similar hardware setup).
 
 I'm going to assume that you named the original snapshot 
 'rpool/ROOT/whatever@today'
 
 (1)   Boot off the OpenSolaris LiveCD
 
 
...
 
 (10)  Activate the restored BE:
   # beadm activate New
 
 
 You should now be all set.   Note:  I have not /explicitly/ tried the above - 
 I should go do that now to see what happens.  :-)

If anyone is going to implement this, much the same procedure is documented at 
Simon Breden's blog:

http://breden.org.uk/2009/08/29/home-fileserver-mirrored-ssd-zfs-root-boot/

which walks through the commands for executing the backup and the restore.

--Ware


Re: [zfs-discuss] [osol-discuss] Moving Storage to opensolaris+zfs. What about backup?

2010-03-08 Thread Svein Skogen
Let's say for a moment I should go for this solution, with the rpool tucked 
away on a USB stick in the same case as the LTO-3 tapes it matches 
timeline-wise (I'm using HP C8017A kits), as a 'zfs send -R' to a file on the 
USB stick. (If, and that's a big if, I get Amanda or Bacula to do a job I'm 
comfortable with, one that has been verified. Not a stab at those software 
projects, more a stab at them being an unknown entity for me.) How would I go 
about restoring:

a) the boot record
b) the rpool (and making it actually bootable off the USB stick)
c) the storage zpool (probably after I get the system back up after a and b, 
but please humor me)?

The reason I'm coming back here is ... well, the performance with Linux (for 
which I actually have software I'm comfortable with) wasn't quite what I've 
grown used to with FreeBSD and Solaris.

//Svein


Re: [zfs-discuss] [osol-discuss] Moving Storage to opensolaris+zfs. What about backup?

2010-03-08 Thread erik.ableson
On 8 March 2010, at 11:33, Svein Skogen wrote:

 Let's say for a moment I should go for this solution, with the rpool tucked 
 away on an usb-stick in the same case as the LTO-3 tapes it matches 
 timelinewise (I'm using HP C8017A  kits) as a zfs send -R to a file on the 
 USB stick. (If, and that's a big if, I get amanda or bacula to do a job I'm 
 comfortable with that has been verified. Not a stab at those software 
 projects, more a stab at them being an unknown entity for me), how would I go 
 about restoring:
 
 a) the boot record
 b) the rpool (and making it actually bootable off the usb stick)
 c) the storage zpool (probably after I get the system back up after a and b, 
 but please humor me).

Assuming that you use a USB key/external drive for booting, all you need to do 
is dd it to an identically sized one while the key is not the current boot 
volume (dd if=/dev/disk1 of=/dev/disk2 while on your computer), and there you 
have your boot record and your rpool. Stick in the backup key, tell your BIOS 
to boot from a USB device and you're running. This requires downtime while 
you're creating the copy.

If the disks that make up your storage zpool are still available, it will 
probably automount without any difficulty; worst case, you'll need to do a 
'zpool import -f poolname'.  Note that this also brings over all of your 
ZFS-based sharing configuration (sharenfs & sharesmb), so your clients are 
back online with a minimum of fuss.
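That recovery path is short enough to sketch end-to-end. The device paths and pool name below are placeholders, and dd overwrites its target, so the script only prints the commands:

```shell
#!/bin/sh
# Clone the boot key to a spare, then re-import the data pool (sketch).
SRC=/dev/rdsk/c2t0d0p0   # current boot key (placeholder device path)
DST=/dev/rdsk/c3t0d0p0   # identically sized spare key (placeholder)
POOL=tank                # data pool (placeholder name)

# Dry run: remove the echos to execute. Double-check SRC/DST first!
echo dd if="$SRC" of="$DST" bs=1024k   # whole-device copy, boot record included
echo zpool import -f "$POOL"           # data pool comes back with its shares
```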

No zfs send/recv required in this scenario. Note that there are no dependencies 
between the boot pool and the storage pool.  No timeline matching to worry 
about. Think of data backup and boot volume backup as two entirely distinct 
operations to manage.

In the worst case (i.e., you lost the whole machine), you have a boot key 
and you've bought new disks.  The boot process is still the same, with no 
tapes or files involved.  In this case you'll need to create a new zpool 
from your new disks and restore the data.  The restore process depends on 
your backup process.  If you're using Amanda or Bacula, you create new ZFS 
filesystems and restore to them as per the tool in question.  If you've 
ignored the current advice and are using zfs send streams on tape, you'll 
start with your baseline tape file and pipe it to 'zfs recv' with the name 
of the destination filesystem you want to create. And pray that there are 
no errors reading from the tape.

If you're using zfs send/recv to some other kind of external storage, like 
USB drives, you just plug them in, 'zpool import', and you're back in 
business right away, with the option to do a send/recv to clone the 
filesystems to the new disks.

Or you can go the traditional route (no downtime for the backup of the boot 
volume); the instructions at 
http://www.solarisinternals.com/wiki/index.php/ZFS_Troubleshooting_Guide#ZFS_Root_Pool_Recovery
are quite detailed about the process involved, both for backing up to a file 
and for restoring.

Erik



Re: [zfs-discuss] [osol-discuss] Moving Storage to opensolaris+zfs. What about backup?

2010-03-08 Thread Erik Trimble

Svein Skogen wrote:

Let's say for a moment I should go for this solution, with the rpool tucked away on an 
usb-stick in the same case as the LTO-3 tapes it matches timelinewise (I'm 
using HP C8017A  kits) as a zfs send -R to a file on the USB stick. (If, and that's a big 
if, I get amanda or bacula to do a job I'm comfortable with that has been verified. Not a 
stab at those software projects, more a stab at them being an unknown entity for me), how 
would I go about restoring:

a) the boot record
b) the rpool (and making it actually bootable off the usb stick)
c) the storage zpool (probably after I get the system back up after a and b, 
but please humor me).

Reason I'm coming back here, is ... well the performance with Linux (that I 
actually have software for which I'm comfortable with) wasn't quite what I've 
grown used to with FreeBSD and Solaris.

//Svein
  
Assume your machine has died the True Death, and you are starting with 
new disks (and, at least a similar hardware setup).


I'm going to assume that you named the original snapshot 
'rpool/ROOT/whatever@today'


(1)   Boot off the OpenSolaris LiveCD

(2)   Do a basic install, using whichever disk you intend to be the new 
root.


(3)   Reboot to your new virgin system

(4)   Mount the USB stick

(5)   Make a new Boot Environment to do the restore into:  
   # beadm create New


 - there should now be a zfs filesystem named 'rpool/ROOT/New'

(6)   Do a 'zfs receive' using the stream stored on the USB stick, with 
the rpool as the destination. You will likely need to rename the incoming 
snapshot. 
   # cat usb_mounted_stream | zfs receive rpool/ROOT/myroot


(7)   A 'zfs list -t all -r rpool' will now show the 
'rpool/ROOT/myroot@today' snapshot and 'rpool/ROOT/myroot' filesystem


(8)   Make sure the mountpoint is /
   # zfs set mountpoint=/ rpool/ROOT/myroot


(9)   Destroy the restored snapshot: 
   # zfs destroy rpool/ROOT/myroot@today


(10)  Replace your New filesystem with the restored one:
   # zfs rename rpool/ROOT/New rpool/ROOT/Old
   # zfs rename rpool/ROOT/myroot rpool/ROOT/New
   # zfs destroy rpool/ROOT/Old

(11)  Activate the restored BE:
   # beadm activate New


You should now be all set.   Note:  I have not /explicitly/ tried the 
above - I should go do that now to see what happens.  :-)



Restoring the data pool is dirt simple.  Create your zpool, and simply 
'zfs receive' from the tapes you have. Likely, it's easier to install 
Amanda (or whatever backup program you're using) and have it do the 
restore/tape management for you.





--
Erik Trimble
Java System Support
Mailstop:  usca22-123
Phone:  x17195
Santa Clara, CA



Re: [zfs-discuss] [osol-discuss] Moving Storage to opensolaris+zfs. What about backup?

2010-03-08 Thread Svein Skogen

On 08.03.2010 13:55, Erik Trimble wrote:
 Svein Skogen wrote:
 Let's say for a moment I should go for this solution, with the rpool
 tucked away on an usb-stick in the same case as the LTO-3 tapes it
 matches timelinewise (I'm using HP C8017A  kits) as a zfs send -R to
 a file on the USB stick. (If, and that's a big if, I get amanda or
 bacula to do a job I'm comfortable with that has been verified. Not a
 stab at those software projects, more a stab at them being an unknown
 entity for me), how would I go about restoring:

 a) the boot record
 b) the rpool (and making it actually bootable off the usb stick)
 c) the storage zpool (probably after I get the system back up after a
 and b, but please humor me).

 Reason I'm coming back here, is ... well the performance with Linux
 (that I actually have software for which I'm comfortable with) wasn't
 quite what I've grown used to with FreeBSD and Solaris.

 //Svein
   
 Assume your machine has died the True Death, and you are starting with
 new disks (and, at least a similar hardware setup).
 
 I'm going to assume that you named the original snapshot
 'rpool/ROOT/whate...@today'
 
 (1)   Boot off the OpenSolaris LiveCD
 
 (2)   Do a basic install, using whichever disk you intend to be the new
 root.
 
 (3)   Reboot to your new virgin system
 
 (4)   Mount the USB stick
 
 (5)   Make a new Boot Environment to do the restore into:
    # beadm create New
 
  - there should now be a zfs filesystem named 'rpool/ROOT/New'
 
 (6)   Do a 'zfs receive' using the stream stored on the USB stick, with
 the rpool as the destination. You will likely need to rename the
 incoming snapshot.
    # cat usb_mounted_stream | zfs receive rpool/ROOT/myroot
 
 (7)   A 'zfs list -t all -r rpool' will now show the
 'rpool/ROOT/myroot@today' snapshot and 'rpool/ROOT/myroot' filesystem
 
 (8)   Make sure the mountpoint is /
    # zfs set mountpoint=/ rpool/ROOT/myroot
 
 (9)   Destroy the restored snapshot:
    # zfs destroy rpool/ROOT/myroot@today
 
 (10)  Replace your New filesystem with the restored one:
    # zfs rename rpool/ROOT/New rpool/ROOT/Old
    # zfs rename rpool/ROOT/myroot rpool/ROOT/New
    # zfs destroy rpool/ROOT/Old
 
 (11)  Activate the restored BE:
    # beadm activate New
 
 
 You should now be all set.   Note:  I have not /explicitly/ tried the
 above - I should go do that now to see what happens.  :-)
 
 
 Restoring the data pool is dirt simple.  Create your zpool, and simply
 'zfs receive' from the tapes you have. Likely, it's easier to install
 Amanda (or whatever backup program you're using) and have it do the
 restore/tape management for you.
 
 
 
 

Thank you for taking the time to explain this to me, I finally feel like
I am getting somewhere. ;)

As I said when I started out, I'm normally a FreeBSD and Windows user, and
... have been spoiled by one-click backup solutions like Acronis and
DataProtector Express, and by open-source install setups like the
ports collection in FreeBSD. I've always known that Solaris exists, that it
is rock solid, and that it is a master at heavy-I/O tasks, but my
hands-on experience has been slim...

I probably should apologize to the entire list for some of my behavior
in this question setup. In retrospect I can see that I've been
disrespectful, and probably looked a lot like a troll. That was not
the intention. My only excuse is the reason for ditching Microsoft
Windows Storage Server 2008 as the backend. A fortnight ago, I had one
of those reasons I make backups, and needed a restore. After that
restore, I found that Windows Storage Server + DataProtector Express had
basically lied to me. It had said everything was backed up and
verified, but that was not the case. On the storage volume, SIS
(Single Instance Storage, Microsoft's file-level deduplication) had
fooled it. The backup software had correctly backed up the
reparse points for the SIS-managed files, but it had failed to back up
the SIS Common Storage, and as a result I lost the 1500 best
photographs I had taken since 2003. I think everybody on this list can ...
visualize the mood I was in when I started out. And with that in mind,
I reacted impolitely to people suggesting things like "you will never
need that backup", etc. Alas, I took out the temper meant for those who
replied directly to my mailbox (and not on the list) on the person who I
misinterpreted to be yet another of those, but who was on the list, and
who did a believable impersonation of a genuine internet troll. Again, I
apologize.

I'm now 14 days into trying to get up a proper backend for safekeeping
my work, and ... quite deprived of sleep, so I may keep on asking stupid
questions that quite probably should be answered by friendly pointers to
the Mountain View company's web services, but I'll post the questions
anyway. (The "how to restore" part has already been answered, but to
get proper insight, I'll post the entire 

Re: [zfs-discuss] [osol-discuss] Moving Storage to opensolaris+zfs. What about backup?

2010-03-04 Thread Svein Skogen
And again ...

Is there any work on an upgrade of zfs send/receive to handle resuming on the 
next media?

I was thinking something along the lines of zfs send (when the device goes 
full) returning:

"send suspended. To resume, insert new media and issue 'zfs resume IDNUMBER'"

and receive handling it as:

"zfs receive: stream ended before end of file system. Insert next media and 
issue 'zfs resume IDNUMBER'"


Both of these would give us the ability to do graceful backups (at near 
wirespeed of modern SAS autoloaders) and restores with few additional tools.

With a little tweak, I suspect zstreamdump could handle verifies as well...

//Svein


Re: [zfs-discuss] [osol-discuss] Moving Storage to opensolaris+zfs. What about backup?

2010-03-04 Thread Erik Trimble

Svein Skogen wrote:

And again ...

Is there any work on an upgrade of zfs send/receive to handle resuming on next 
media?

I was thinking something along the lines of zfs send (when device goes full) 
returning

send suspended. To resume insert new media and issue zfs resume IDNUMBER

and receive handling:
zfs receive stream ended before end of file system. Insert next media and issue zfs 
resume IDNUMBER


Both of these would give us the ability to do graceful backups (at near 
wirespeed of modern SAS autoloaders) and restores with few additional tools.

With a little tweak, I suspect zstreamdump could handle verifies as well...

//Svein
  
What you are after is for 'zfs send|receive' to act just like 
ufsdump|ufsrestore.  Frankly, I don't think that's going to happen 
anytime soon.  dump|restore are badly out of date (not just on Solaris, 
but on any OS).  They suck as any sort of larger-scale backup system: they 
are missing any sort of indexing and browsing tools, have no tape 
identification and cataloging, and are really only useful for fast 
restore of very limited, well-defined data sets (at this point, OS 
reinstalls, really).


I can certainly see having 'zfs send|receive' be able to split their 
stream into chunks somehow - this would make integration with things 
like Amanda much simpler.  But let's face it: it's /highly/ unlikely 
that you would have just 'zfs send|receive' without also being able to 
get at your add-on backup software, all of which is simple to install 
(and even many commercial packages can be run for a short time without 
license keys).
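Even without such a feature, chunking can be approximated today with split(1). The sketch below uses placeholder data in place of a real 'zfs send' stream, and tiny 10-byte chunks instead of tape-sized ones:

```shell
#!/bin/sh
# Chunk a stream with split(1) and rejoin it (sketch). A real run would
# replace the printf with "zfs send -R tank@today" and the wc -c with
# "zfs receive -dF tank"; all names here are illustrative.
mkdir -p /tmp/chunks
rm -f /tmp/chunks/tank.zstream.*

printf 'pretend-this-is-a-send-stream' |
    split -b 10 - /tmp/chunks/tank.zstream.

# The shell's sorted glob rejoins the chunks (.aa, .ab, .ac) in order:
cat /tmp/chunks/tank.zstream.* | wc -c   # the full 29-byte stream again
```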

The scenario of "my server burned to the ground, and all I've got is the 
OS CD and a single backup tape" has pretty much gone to dust with the 
ancients nowadays - if you find yourself in such a scenario, you've either 
done some really poor planning, or civilization has collapsed.




Re: [zfs-discuss] [osol-discuss] Moving Storage to opensolaris+zfs. What about backup?

2010-03-04 Thread Edward Ned Harvey
 Is there any work on an upgrade of zfs send/receive to handle resuming
 on next media?

Please see Darren's post, pasted below.


 -Original Message-
 From: opensolaris-discuss-boun...@opensolaris.org [mailto:opensolaris-
 discuss-boun...@opensolaris.org] On Behalf Of Darren Mackay

 mkfifo /tmp/some_pipe
 zfs snapshot ...
 zfs send -I startsnap endsnap > /tmp/some_pipe
 
 then something that is able to back up to tape from a pipe, and manage
 your tapes, etc. - use what you are comfortable with, but not all backup
 apps are the same
 
 note - if changing tapes takes too long, the pipe *may* actually time
 out on the send side if you are not careful...
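The fifo plumbing itself can be tried without zfs at all; in this sketch dummy data stands in for the send stream, and a plain reader stands in for the backup application:

```shell
#!/bin/sh
# The writer blocks on the fifo until a reader (the backup app) drains
# it -- which is also why a slow tape change can time the sender out.
PIPE=/tmp/some_pipe
rm -f "$PIPE"
mkfifo "$PIPE"

printf 'fake-incremental-stream' > "$PIPE" &  # stands in for zfs send -I
cat "$PIPE"                                   # stands in for the tape backup app
wait
rm -f "$PIPE"
```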



Re: [zfs-discuss] [osol-discuss] Moving Storage to opensolaris+zfs. What about backup?

2010-03-04 Thread Edward Ned Harvey
 Is there any work on an upgrade of zfs send/receive to handle resuming
 on next media?

See Darren's post regarding mkfifo.  The purpose is to enable you to use 
normal backup tools that support changing tapes to back up your "zfs send" 
stream across multiple tapes.  I wonder, though - during a restore, does 
the backup tool support writing to another fifo?  Or would you have to 
restore to a file first, and then do a "cat somefile | zfs receive"?

Please also be aware that a "zfs send" stream was never meant to be stored, 
on tape or any other media.  It is meant to stream directly into a "zfs 
receive".
Please consider this alternative:

You can create a file container (which can be sparse) and create a ZFS pool 
inside it.  Do a 'zfs receive' into that pool.  Then you export the pool, 
and you can use whatever normal file backup tool you want.  This method has 
the disadvantage that it requires extra staging disk space, but it has the 
advantage that it's far more reliable as a backup/restore technique.  If 
bit errors creep in on the tape, ZFS's checksums will at least detect them 
(and, given redundant copies, correct them).
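A sketch of that staging approach; the file path, size, pool name, and tape device are all invented for illustration, and the commands are only printed:

```shell
#!/bin/sh
# Receive into a pool backed by a sparse file, then back up the file
# with any ordinary tool (sketch; remove the echos to execute).
IMG=/staging/backup-pool.img
echo mkfile -n 500g "$IMG"                     # sparse container file
echo zpool create backuppool "$IMG"            # pool on top of the file
echo "zfs send -R tank@today | zfs receive -dF backuppool"
echo zpool export backuppool                   # quiesce the pool first
echo "tar cf /dev/rmt/0 $IMG"                  # any normal file backup
```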



Re: [zfs-discuss] [osol-discuss] Moving Storage to opensolaris+zfs. What about backup?

2010-03-04 Thread Erik Trimble

Svein Skogen wrote:


On 04.03.2010 13:18, Erik Trimble wrote:
  

Svein Skogen wrote:


And again ...

Is there any work on an upgrade of zfs send/receive to handle resuming
on next media?

I was thinking something along the lines of zfs send (when device goes
full) returning

send suspended. To resume insert new media and issue zfs resume
IDNUMBER

and receive handling:
zfs receive stream ended before end of file system. Insert next media
and issue zfs resume IDNUMBER


Both of these would give us the ability to do graceful backups (at
near wirespeed of modern SAS autoloaders) and restores with few
additional tools.

With a little tweak, I suspect zstreamdump could handle verifies as
well...

//Svein
  
  

What you are after is 'zfs send|receive' to act just like
ufsdump|ufsrestore.  Frankly, I don't think that's going to happen
anytime soon.  dump|receive are badly out-of-date (not just on Solaris,
but on any OS).  They suck as any sort of larger scale backup system, as
they are missing any sort of indexing and browsing tools, no tape
identification and cataloging, and are really only useful for fast
restore of very limited, well-defined data sets (at this point, OS
reinstalls, really).

I can certainly see having 'zfs send|receive' being able to split their
stream into chunks somehow - this would make integration with things
like Amanda much simpler.  But let's face it: it's /highly/ unlikely
that you would have just 'zfs send|receive' without also being able to
get at your add-on backup software, all of which is simple to install
(and, even if you are using many commercial packages, they're even able
to be run for a short time without license keys).
The scenario of my server burned to the ground, and all I've got is the
OS CD and a single backup tape is pretty much dust with the ancients
nowdays - if you find yourself in such a scenario, you've either done
some really poor planning, or civilization has collapsed.



Actually, this sounds a lot like "if you're not a big enough corporation
to have multiple locations with servers online, your data just isn't
important enough to store backups of in the first place."
No, the point I'm making is that dump/restore is OBSOLETE.  It has been 
replaced by things like Bacula and Amanda (on the freeware side), and 
Networker or NetBackup on the commercial side.  dump/restore is 
hideously out of date, and severely lacking in features for anyone 
these days.  Performance on them is horrible, too.  The situation is 
analogous to cpio: yes, there are very limited places where it's useful, 
but it really has been completely supplanted by tar (esp. gtar).   Go 
look at the new tools, and quit complaining that we haven't completely 
emulated your Old Favorite tool.  Frankly, using dump/restore as your 
backup tools means you are making life /much/ more difficult for 
yourself than necessary.




I'm a one-man photographer. I have the server solution at home. This
home is built from wood, but I have a really solid cellar with a safe
suitable for tapes, and I have a safe off-site place to store 20
tapes (which will be used for rotation).

But, I guess my data simply isn't important enough, since I can't afford
to have two locations with online servers to do real-time sync.

Thanks for making this so clear to me, since this means I'll have to
look at alternatives to OpenSolaris sooner rather than later.

(And as you can read, the situation where all I have left is a stack of
tapes and a new server from insurance to restore to ISN'T dependent on
civilization collapsing, but something that I must prepare for, that
_CAN_ happen. But thanks for making it obvious that OpenSolaris+ZFS for
me is a solution in search of a problem, rather than the solution for my
setup.)

//Svein
Once again, you are missing the point.  What I'm saying is that unless 
you have completely lost all network connectivity to the outside world 
(in which case, you have much bigger problems than your server being 
dead) AND you didn't plan ahead to have a small USB flash drive or DVD 
sitting around, the old concept of "restore from just OS media and a 
single backup tape" isn't realistic. 

To be clear, you can do what you want with the following items (besides 
your server):


(1) OpenSolaris LiveCD
(1) 8 GB USB flash drive
As many tapes as you need to store your data pools on.

Make sure the USB drive has a saved stream from your rpool.  It should 
also have a downloaded copy of whichever main backup software you use.


That's it. You back up data using Amanda/Bacula/et al. onto tape.  You 
back up your boot/root filesystem using 'zfs send' onto the USB key.
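The USB-key half of that scheme is only two commands. The snapshot name and mount point here are invented, and the commands are echoed rather than executed:

```shell
#!/bin/sh
# Refresh the saved rpool stream on the (already mounted) USB key.
# Dry run: remove the echos to execute for real.
TODAY=$(date +%Y%m%d)
echo "zfs snapshot -r rpool@$TODAY"
echo "zfs send -R rpool@$TODAY > /mnt/usbkey/rpool-$TODAY.zstream"
```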



You've been given several simple ways to restore a ZFS system from bare 
metal. None requires expensive extra hardware, or is any more 
labor-intensive (or sysadmin-unfriendly) than doing so on FreeBSD.




Re: [zfs-discuss] [osol-discuss] Moving Storage to opensolaris+zfs. What about backup?

2010-03-04 Thread Svein Skogen
Just disregard this thread. I'm resolving the issue using other methods (not 
including Solaris).

//Svein


Re: [zfs-discuss] [osol-discuss] Moving Storage to opensolaris+zfs. What about backup?

2010-03-04 Thread valrh...@gmail.com
Does this work with dedup? If you have a deduped pool and send it to a file, 
will the file reflect the smaller size, or will this rehydrate things first?


Re: [zfs-discuss] [osol-discuss] Moving Storage to opensolaris+zfs. What about backup?

2010-03-04 Thread Ian Collins

valrh...@gmail.com wrote:
 Does this work with dedup? 
Does what work?  Context, please! (I'm reading this on webmail with 
limited history.)

 If you have a deduped pool and send it to a file, will it reflect the 
 smaller size, or will this rehydrate things first?

That depends on the properties of the pool at the receiving end, or of the 
stream.  Assuming you are responding to "You can create a file container 
(can be sparse) and create a ZFS filesystem inside it.", there is a 
filesystem within a file, so you can tune the properties of that 
filesystem.


--
Ian.



Re: [zfs-discuss] [osol-discuss] Moving Storage to opensolaris+zfs. What about backup?

2010-03-04 Thread Freddie Cash
On Thu, Mar 4, 2010 at 1:28 PM, valrh...@gmail.com wrote:

 Does this work with dedup? If you have a deduped pool and send it to a
 file, will it reflect the smaller size, or will this rehydrate things
 first?


'zfs send' without any options will send the data in its normal (i.e., 
non-deduplicated) form.

There is a fairly recent option to 'zfs send' (-D) that will dedupe the 
stream as it is sent, so that you can save a deduped stream to a file.

Depending on the dataset and/or pool dedup setting on the receiving side, 
the received data will be stored deduplicated or not.
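Under that description, saving and restoring a deduplicated stream would look something like this (snapshot and file names invented for illustration; echoed rather than executed):

```shell
#!/bin/sh
# -D deduplicates blocks within the stream itself, so the saved file
# stays small regardless of the receiving pool's dedup property.
SEND="zfs send -D tank@backup > /external/tank-backup.zstream"
RECV="zfs receive -dF tank < /external/tank-backup.zstream"
echo "$SEND"
echo "$RECV"
```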

-- 
Freddie Cash
fjwc...@gmail.com


Re: [zfs-discuss] [osol-discuss] Moving Storage to opensolaris+zfs. What about backup?

2010-03-04 Thread Svein Skogen

On 04.03.2010 13:18, Erik Trimble wrote:
 Svein Skogen wrote:
 And again ...

 Is there any work on an upgrade of zfs send/receive to handle resuming
 on next media?

 I was thinking something along the lines of zfs send (when device goes
 full) returning

 send suspended. To resume insert new media and issue zfs resume
 IDNUMBER

 and receive handling:
 zfs receive stream ended before end of file system. Insert next media
 and issue zfs resume IDNUMBER


 Both of these would give us the ability to do graceful backups (at
 near wirespeed of modern SAS autoloaders) and restores with few
 additional tools.

 With a little tweak, I suspect zstreamdump could handle verifies as
 well...

 //Svein
   
 What you are after is 'zfs send|receive' to act just like
 ufsdump|ufsrestore.  Frankly, I don't think that's going to happen
 anytime soon.  dump|receive are badly out-of-date (not just on Solaris,
 but on any OS).  They suck as any sort of larger scale backup system, as
 they are missing any sort of indexing and browsing tools, no tape
 identification and cataloging, and are really only useful for fast
 restore of very limited, well-defined data sets (at this point, OS
 reinstalls, really).
 
 I can certainly see having 'zfs send|receive' being able to split their
 stream into chunks somehow - this would make integration with things
 like Amanda much simpler.  But let's face it: it's /highly/ unlikely
 that you would have just 'zfs send|receive' without also being able to
 get at your add-on backup software, all of which is simple to install
 (and, even if you are using many commercial packages, they're even able
 to be run for a short time without license keys).
 The scenario of my server burned to the ground, and all I've got is the
 OS CD and a single backup tape is pretty much dust with the ancients
 nowdays - if you find yourself in such a scenario, you've either done
 some really poor planning, or civilization has collapsed.

Actually, this sounds a lot like "if you're not a big enough corporation
to have multiple locations with servers online, your data just isn't
important enough to store backups of in the first place."

I'm a one-man photographer. I have the server solution at home. This
home is built from wood, but I have a really solid cellar with a safe
suitable for tapes, and I have a safe off-site place to store 20
tapes (which will be used for rotation).

But, I guess my data simply isn't important enough, since I can't afford
to have two locations with online servers to do real-time sync.

Thanks for making this so clear to me, since this means I'll have to
look at alternatives to OpenSolaris sooner rather than later.

(And as you can read, the situation where all I have left is a stack of
tapes and a new server from insurance to restore to ISN'T dependent on
civilization collapsing, but something that I must prepare for, that
_CAN_ happen. But thanks for making it obvious that OpenSolaris+ZFS for
me is a solution in search of a problem, rather than the solution for my
setup.)

//Svein



Re: [zfs-discuss] [osol-discuss] Moving Storage to opensolaris+zfs. What about backup?

2010-03-04 Thread Darren J Moffat

On 04/03/2010 21:28, valrh...@gmail.com wrote:

Does this work with dedup? If you have a deduped pool and send it to a file, will it 
reflect the smaller size, or will this rehydrate things first?


See zfs(1M) for the description of the -D flag to 'zfs send'.

--
Darren J Moffat


Re: [zfs-discuss] [osol-discuss] Moving Storage to opensolaris+zfs. What about backup?

2010-03-04 Thread valrh...@gmail.com
How does this work with an incremental backup?

Right now, I do my incremental backup with:

zfs send -R -i p...@snapshot1 p...@snapshot2 | \
  ssh r...@192.168.1.200 zfs receive -dF destination_pool

Does it make sense to put a -D in there, and if so, where? Thanks!
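
(For what it's worth, the send-side flags combine, so on builds that
support deduplicated streams -D would sit alongside -R and -i. A sketch,
keeping the obfuscated pool names from the question as-is:)

```shell
# -D only changes the stream encoding on the sending side, so the
# receive end of the pipeline is unchanged.
zfs send -D -R -i p...@snapshot1 p...@snapshot2 | \
  ssh r...@192.168.1.200 zfs receive -dF destination_pool
```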
-- 
This message posted from opensolaris.org


Re: [zfs-discuss] [osol-discuss] Moving Storage to opensolaris+zfs. What about backup?

2010-03-04 Thread Richard Elling
On Mar 4, 2010, at 4:33 AM, Svein Skogen wrote:
 -----BEGIN PGP SIGNED MESSAGE-----
 Hash: SHA1
 
 On 04.03.2010 13:18, Erik Trimble wrote:
 Svein Skogen wrote:
 And again ...
 
 Is there any work on an upgrade of zfs send/receive to handle resuming
 on next media?
 
 I was thinking something along the lines of zfs send (when the device
 fills up) returning:
 
 "send suspended. To resume, insert new media and issue 'zfs resume
 IDNUMBER'"
 
 and zfs receive handling:
 
 "stream ended before end of file system. Insert next media
 and issue 'zfs resume IDNUMBER'"
 
 
 Both of these would give us the ability to do graceful backups (at
 near wirespeed of modern SAS autoloaders) and restores with few
 additional tools.
 
 With a little tweak, I suspect zstreamdump could handle verifies as
 well...
 
 //Svein
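 
 [A rough sketch of the verify pass mentioned above, assuming a saved
 stream file with a placeholder path: zstreamdump reads a ZFS send
 stream on stdin and prints its headers and record summary, which gives
 a basic sanity check that the stream parses end to end.]
 
 ```shell
 # Parse a previously saved send stream as a basic integrity check.
 # The stream path is hypothetical.
 zstreamdump < /backup/tank-data@backup1.zsend
 ```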
 
 What you are after is 'zfs send|receive' acting just like
 ufsdump|ufsrestore.  Frankly, I don't think that's going to happen
 anytime soon.  dump|restore is badly out of date (not just on Solaris,
 but on any OS).  They suck as any sort of larger-scale backup system:
 no indexing or browsing tools, no tape identification or cataloging,
 and they're really only useful for fast restore of very limited,
 well-defined data sets (at this point, OS reinstalls, really).
 
 I can certainly see 'zfs send|receive' being able to split their
 stream into chunks somehow - this would make integration with things
 like Amanda much simpler.  But let's face it: it's /highly/ unlikely
 that you would have just 'zfs send|receive' without also being able to
 get at your add-on backup software, all of which is simple to install
 (and even many commercial packages can be run for a short time without
 license keys).
 The scenario of "my server burned to the ground, and all I've got is
 the OS CD and a single backup tape" is pretty much dust with the
 ancients nowadays - if you find yourself in such a scenario, you've
 either done some really poor planning, or civilization has collapsed.
 
 Actually, this sounds a lot like "if you're not a big enough
 corporation to have multiple locations with servers online, your data
 just isn't important enough to store backups of in the first place."
 
 I'm a one-man photographer. I have the server solution at home. This
 home is built from wood, but I have a real solid cellar with a safe
 suitable for tapes, and I have a safe off-location place to store 20
 tapes (which will be used for rotation).
 
 But, I guess my data simply isn't important enough, since I can't
 afford two locations with online servers to do real-time sync.

Horsehockey.  ZFS works fine with traditional backup solutions which also
support tape.  These include Zmanda, Networker, and NetBackup.  For 
more information, see the ZFS Best Practices Guide section on Backup.
http://www.solarisinternals.com/wiki/index.php/ZFS_Best_Practices_Guide#ZFS_Backup_.2F_Restore_Recommendations
 -- richard

ZFS storage and performance consulting at http://www.RichardElling.com
ZFS training on deduplication, NexentaStor, and NAS performance
http://nexenta-atlanta.eventbrite.com (March 16-18, 2010)



