Re: [BackupPC-users] Recovery from archive

2011-06-07 Thread Tyler J. Wagner
On Tue, 2011-06-07 at 01:34 +0100, Timothy Murphy wrote:
 Is there any kind of BackupPC data on the machine itself 
 required for recovery of the archived material?

If your installation is Ubuntu, you need:

/etc/backuppc
/var/lib/backuppc

It's common to have the second of these on its own datastore. If you're
replicating that elsewhere, or recovering the disks, also make sure you
regularly back up /etc/backuppc as well.

To recover the data, just install backuppc on the new server, and move
those two directories to the same place.
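One caveat when moving those directories: the pool under /var/lib/backuppc is mostly hard links, so it must be copied in a link-preserving way or it balloons to many times its size. A minimal sketch of a link-aware copy in Python (the paths and helper name are illustrative, not part of BackupPC; `rsync -aH` or GNU `cp -a` do the same from the command line):

```python
# Hard-link-aware recursive copy: files sharing an inode in the source
# tree are re-created as hard links in the destination instead of being
# stored as independent full copies.
import os, shutil

def copy_preserving_links(src: str, dst: str) -> None:
    inode_map = {}                          # (device, inode) -> first copy made
    for root, dirs, files in os.walk(src):
        rel = os.path.relpath(root, src)
        target_dir = os.path.join(dst, rel) if rel != "." else dst
        os.makedirs(target_dir, exist_ok=True)
        for name in files:
            s = os.path.join(root, name)
            d = os.path.join(target_dir, name)
            st = os.stat(s)
            key = (st.st_dev, st.st_ino)
            if key in inode_map:
                os.link(inode_map[key], d)  # later link: no data copied
            else:
                shutil.copy2(s, d)          # first sight of this inode
                inode_map[key] = d
```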

Other distros have similar directory structures.

Regards,
Tyler

-- 
No one can terrorize a whole nation, unless we are all his accomplices.
   -- Edward R. Murrow


--
EditLive Enterprise is the world's most technically advanced content
authoring tool. Experience the power of Track Changes, Inline Image
Editing and ensure content is compliant with Accessibility Checking.
http://p.sf.net/sfu/ephox-dev2dev
___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:http://backuppc.wiki.sourceforge.net
Project: http://backuppc.sourceforge.net/


Re: [BackupPC-users] Backup of VM images

2011-06-07 Thread Boniforti Flavio
Hello Les and thanks for giving your feedback...

[cut]

 over the whole file.  If they do change and you use rsync, 
 only the differences will be transferred (to the extent that 
 rsync can find them and resync on the matching parts in a 
 huge file), but the server will use the old copy and the 
 differences to reconstruct a full-sized copy which is slow 
 and won't be pooled with anything else.  If the size and rate 

So I'm right when thinking that rsync *does* transfer only the bits of a
file (no matter how big) which have changed, and *not* the whole file?
It wouldn't matter if I don't get anything pooled; I'd just have to
allow for the full file size to store every copy of that VM image.

 of change makes this impractical, there are some more 
 efficient approaches you could try that would make an 
 intermediate delta-based backup.

Well, size is a critical parameter, because I can suppose that VM images
are quite *big* files.
But if the data transfer could be reduced by using rsync (over ssh of
course), there's no problem because the initial transfer would be done
by importing the VM images from a USB HDD. Therefore, only subsequent
backups (rsyncs) would transfer data.

What do you think?

Kind regards,
Flavio Boniforti

PIRAMIDE INFORMATICA SAGL
Via Ballerini 21
6600 Locarno
Switzerland
Phone: +41 91 751 68 81
Fax: +41 91 751 69 14
URL: http://www.piramide.ch
E-mail: fla...@piramide.ch 



Re: [BackupPC-users] Backup of VM images

2011-06-07 Thread Holger Parplies
Hi,

Boniforti Flavio wrote on 2011-06-07 11:00:24 +0200 [Re: [BackupPC-users] 
Backup of VM images]:
 [...]
 So I'm right when thinking that rsync *does* transfer only the bits of a
 file (no matter how big) which have changed, and *not* the whole file?

usually that's correct. Presuming rsync *can* determine which parts have
changed, and presuming these parts *can* be efficiently transferred. For
example, changing every second byte in a file obviously *won't* lead to a
reduction of transfer bandwidth by 50%. So it really depends on *how* your
files change.
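The point that savings depend on *how* the file changes can be sketched with a toy model of rsync's block matching. This is a simplification (real rsync also matches blocks at arbitrary offsets via a rolling checksum; this fixed-offset version only illustrates the effect, and the block size is arbitrary):

```python
# Toy model: the receiver's old copy is split into fixed-size blocks;
# any block of the new file that matches an old block verbatim is not
# retransmitted, everything else is sent as literal data.
BLOCK = 1024

def blocks(data: bytes):
    return [data[i:i + BLOCK] for i in range(0, len(data), BLOCK)]

def literal_bytes(old: bytes, new: bytes) -> int:
    """Bytes that must be sent because no old block matches."""
    old_blocks = set(blocks(old))
    return sum(len(b) for b in blocks(new) if b not in old_blocks)

old = bytes(1024 * 1024)                     # 1 MiB of zeros
contiguous = b"X" * 4096 + old[4096:]        # one small edit at the front
scattered = bytes(1 if i % 2 else 0 for i in range(len(old)))  # every 2nd byte

print(literal_bytes(old, contiguous))        # only the touched blocks: 4096
print(literal_bytes(old, scattered))         # effectively the whole file
```

A contiguous edit costs a few blocks; touching every second byte defeats matching entirely, exactly as described above.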

 [...]
 Well, size is a critical parameter, because I can suppose that VM images
 are quite *big* files.
 But if the data transfer could be reduced by using rsync (over ssh of
 course), there's no problem because the initial transfer would be done
 by importing the VM images from a USB HDD. Therefore, only subsequent
 backups (rsyncs) would transfer data.
 
 What do you think?

First of all, you keep saying VM images, but you don't mention from which VM
product. Nobody says VM images are simple file-based images of what the virtual
disk looks like. They're some opaque structure optimized for whatever the
individual VM product wants to handle efficiently (which is probably *not*
rsyncability). Black boxes, so to say. There are probably people on this list
who can tell you from experience how VMware virtual disks behave (or VirtualBox
or whatever), and it might even be very likely that they all behave in similar
ways (such as changing roughly the same amount of the virtual disk file for the
same amount of changes within the virtual machine), but there's really no
guarantee for that. You should try it out and see what happens in your case.

Secondly, you say that the images are already somewhere, and your
responsibility is simply to back them up. Hopefully, your client didn't have
the smart idea to also encrypt the images and simply forget to tell you.
Encryption would pretty much guarantee 0% rsync savings.
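Why encryption guarantees this can be sketched too: chained (CBC-style) encryption makes every ciphertext block depend on the previous one, so a single changed plaintext byte rewrites the rest of the file and leaves rsync nothing to match. The construction below is a toy (xor with a SHA-256 keystream), not real cryptography, used only to show the propagation:

```python
# Toy CBC-style chaining: each block is xor'd with a keystream derived
# from the previous ciphertext block, so a change propagates forward.
import hashlib

BLOCK = 32

def toy_encrypt(key: bytes, plaintext: bytes) -> list:
    prev = b"\x00" * BLOCK
    out = []
    for i in range(0, len(plaintext), BLOCK):
        chunk = plaintext[i:i + BLOCK]
        pad = hashlib.sha256(key + prev).digest()[:len(chunk)]
        prev = bytes(x ^ y for x, y in zip(chunk, pad))
        out.append(prev)
    return out

a = toy_encrypt(b"k", bytes(10 * BLOCK))
changed = bytearray(10 * BLOCK)
changed[5 * BLOCK] = 1                    # flip one byte in block 5
b = toy_encrypt(b"k", bytes(changed))
print(sum(x == y for x, y in zip(a, b)))  # → 5: every block from 5 on differs
```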

Thirdly, as long as things work as they are supposed to, you are probably
fine. But what if something malfunctions and, say, your client mistakenly
drops an empty (0 byte) file for an image one day (some partition may have
been full and an automated script didn't notice)? The backup of the 0-byte
file will be quite efficient, but I don't want to think about the next
backup. That may only be a problem if the 0-byte file actually lands in a
backup that is used as a reference backup, but it's an example meant to
illustrate that you *could* end up transferring the whole data set, and you
probably won't notice until it congests your links. Nothing will ever
malfunction? Ok, a virtual host is probably perfectly capable of actually
*changing* the complete virtual disk contents if directed to (system update,
encrypting the virtual host's file systems, file system defragmentation
utility, malicious clobbering of data by an intruder ...). rsync bandwidth
savings are a fine thing. Relying on them when you have no control over the
data you are transferring may not be wise, though.
And within BackupPC may not be the best place to handle problems. For
instance, if you first made a local copy of the images and then backed up
that *copy*, you could script just about any checks you want to, use bandwidth
limiting, abort transfers of single images that take too long, use a
specialized tool that handles your VM images more efficiently than rsync,
split your images after transferring ... it really depends on what guarantees
you are making, what constraints you want (or need) to apply, how much effort
you want to invest (and probably other things I've forgotten).

Hope that helps.

Regards,
Holger



Re: [BackupPC-users] Recovery from archive

2011-06-07 Thread Timothy Murphy
Tyler J. Wagner wrote:

 On Tue, 2011-06-07 at 01:34 +0100, Timothy Murphy wrote:
 Is there any kind of BackupPC data on the machine itself
 required for recovery of the archived material?
 
 If your installation is Ubuntu, you need:
 
 /etc/backuppc
 /var/lib/backuppc
 
 It's common to have the second of these on its own datastore. If you're
 replicating that elsewhere, or recovering the disks, also make sure you
 regularly backup /etc/backuppc as well.
 
 To recover the data, just install backuppc on the new server, and move
 those two directories to the same place.
 
 Other distros have similar directory structures.

I should have said that I am running CentOS-5.6 on my BackupPC machine.
Also /var/lib/BackupPC is linked to another partition /BackupPC .
I can see that I should save /etc/backuppc ,
but is it really necessary to save /var/lib/BackupPC/ ?
This seems to be exactly what BackupPC has archived,
and is almost the same size as the archive file
(48GB archive file against 34GB /BackupPC in my case).

(I have actually copied everything to another computer,
but that was just so I can copy back to another disk.)


-- 
Timothy Murphy  
e-mail: gayleard /at/ eircom.net
tel: +353-86-2336090, +353-1-2842366
s-mail: School of Mathematics, Trinity College, Dublin 2, Ireland




Re: [BackupPC-users] Recovery from archive

2011-06-07 Thread Tyler J. Wagner
On Tue, 2011-06-07 at 12:43 +0100, Timothy Murphy wrote:
 I can see that I should save /etc/backuppc ,
 but is it really necessary to save /var/lib/BackupPC/ ?
 This seems to be exactly what BackupPC has archived,
 and is almost the same size as the archive file
 (48GB archive file against 34GB /BackupPC in my case).

No, it is not necessary. I misread your original post. You just need the
archive, plus /etc/backuppc.

However, you should test this. Try restoring, add a line to disable all
scheduled jobs, and start backuppc on your backup system. Verify you can
browse and restore files.

Regards,
Tyler


-- 
Humanity is disappointing, but it's nothing personal.
   -- Jayme Wilmore




Re: [BackupPC-users] Backup of VM images

2011-06-07 Thread Boniforti Flavio
Hello Holger,

I was actually missing your reply ;-)

  What do you think?
 
 First of all, you keep saying VM images, but you don't 
 mention from which VM product. Nobody says VM images are 
 simple file based images of what the virtual disk looks like. 

[cut]

Yes, you are right and my customer has yet to give me some more specs
about *which product* is producing the VM images. But in the end, I bet
it won't matter too much because, as you said, those are opaque
structures, not developed for rsyncability.

 Secondly, you say that the images are already somewhere, and 
 your responsibility is simply to back them up. Hopefully, 
 your client didn't have the smart idea to also encrypt the 
 images and simply forget to tell you.
 Encryption would pretty much guarantee 0% rsync savings.

That's another point I have to clear with my customer.

 Thirdly, as long as things work as they are supposed to, you 
 are probably fine. But what if something malfunctions and, 
 say, your client mistakenly drops an empty (0 byte) file for 
 an image one day (some partition may have been full and an 
 automated script didn't notice)? The backup of the 0-byte 
 file will be quite efficient, but I don't want to think about 
 the next backup. That may only be a problem if the 0-byte 
 file actually lands in a backup that is used as a reference 

[cut]

Indeed, that would be a *big* problem!
And as you point at this kind of issue: it *did* happen to me, that a
customer moved a directory which contained about 20 GB of data!!!

 And within BackupPC may not be the best place to handle 
 problems. For instance, if you first made a local copy of the 
 images and then backed up that *copy*, you could script just 
 about any checks you want to, use bandwidth limiting, abort 
 transfers of single images that take too long, use a 
 specialized tool that handles your VM images more efficiently 
 than rsync, split your images after transferring ... it 
 really depends on what guarantees you are making, what 
 constraints you want (or need) to apply, how much effort you 
 want to invest (and probably other things I've forgotten).

hehehe... I see what you're pointing at... I simply thought about using
BackupPC because then I'm not doing anything more than configuring an ssh
tunnel and rsyncing stuff through it. But in this particular scenario, it is
not the best solution.

Thanks for your detailed thoughts, very helpful as usual! ;-)

Kind regards,
Flavio Boniforti

PIRAMIDE INFORMATICA SAGL
Via Ballerini 21
6600 Locarno
Switzerland
Phone: +41 91 751 68 81
Fax: +41 91 751 69 14
URL: http://www.piramide.ch
E-mail: fla...@piramide.ch 



Re: [BackupPC-users] Recovery from archive

2011-06-07 Thread Les Mikesell
On 6/7/11 6:43 AM, Timothy Murphy wrote:

 I should have said that I am running CentOS-5.6 on my BackupPC machine.
 Also /var/lib/BackupPC is linked to another partition /BackupPC .
 I can see that I should save /etc/backuppc ,
 but is it really necessary to save /var/lib/BackupPC/ ?
 This seems to be exactly what BackupPC has archived,
 and is almost the same size as the archive file
 (48GB archive file against 34GB /BackupPC in my case).

I don't understand the difference in size when you say they are linked.

 (I have actually copied everything to another computer,
 but that was just so I can copy back to another disk.)

Did you copy in a way that preserves the hard links within the archive?

-- 
   Les Mikesell
lesmikes...@gmail.com



Re: [BackupPC-users] Backup of VM images

2011-06-07 Thread Les Mikesell
On 6/7/11 4:00 AM, Boniforti Flavio wrote:
 Hello Les and thanks for giving your feedback...

 [cut]

 over the whole file.  If they do change and you use rsync,
 only the differences will be transferred (to the extent that
 rsync can find them and resync on the matching parts in a
 huge file), but the server will use the old copy and the
 differences to reconstruct a full-sized copy which is slow
 and won't be pooled with anything else.  If the size and rate

 So I'm right when thinking that rsync *does* transfer only the bits of a
 file (no matter how big) which have changed, and *not* the whole file?
 It wouldn't matter if I don't get anything pooled, I'd just have to
 choose the correct filesize dimension to store every copy of that VM
 image.

Yes, and you can add the '-C' option to the ssh command in the config to get 
some compression on the connection.   The transfer will still be slow, because 
your server will be reading/uncompressing the old file and copying bits from it 
as it writes the new one.

 of change makes this impractical, there are some more
 efficient approaches you could try that would make an
 intermediate delta-based backup.

 Well, size is a critical parameter, because I can suppose that VM images
 are quite *big* files.
 But if the data transfer could be reduced by using rsync (over ssh of
 course), there's no problem because the initial transfer would be done
 by importing the VM images from a USB HDD. Therefore, only subsequent
 backups (rsyncs) would transfer data.

 What do you think?

I don't think backuppc is the best tool for this job, but it's not impossible 
for it to work.  Whether it is practical will depend on the total size and 
rate of change along with your bandwidth and server speed.  One other issue you 
may have to deal with is making sure your backup does not overlap with any 
changes on the local side.  You mentioned that these are already 'copies', but 
if they are frequently overwritten you have to be sure that you have a 
consistent snapshot before it changes, or it will be unlikely to work when 
restored.  You should also test restoring one so you know what kind of time to 
expect when/if it is needed.

-- 
Les Mikesell
 lesmikes...@gmail.com



Re: [BackupPC-users] Backup of VM images

2011-06-07 Thread Andrew Schulman
 If they don't change between runs, backuppc will pool the new instance 
 with the previous, although a full backup may still take a long time as 
 the block checksum verification is done over the whole file.  If they do 
 change and you use rsync, only the differences will be transferred (to 
 the extent that rsync can find them and resync on the matching parts in 
 a huge file), but the server will use the old copy and the differences 
 to reconstruct a full-sized copy which is slow and won't be pooled with 
 anything else.

Although this is true generally, I don't think it applies in this case.
What you say is true in the case of a single file that has changes in it.
Then rsync efficiently transfers only the delta.

But that doesn't apply in this case, because backuppc doesn't change the
existing VM image in the storage pool.  Instead, it creates an entire new
file, which then has to be transferred completely.  Even if the new file is
99.99% identical to some other file in the pool, it won't help, because rsync
isn't comparing that file to every other file in the pool.  It's only
comparing the source and target copies of the new file, and the target
doesn't exist yet, so it has to be copied completely from the source.

Someone please correct me if I'm wrong about that.
Andrew.




Re: [BackupPC-users] Backup of VM images

2011-06-07 Thread Jim Kyle
On Tuesday, June 7, 2011, at 4:54:26 AM, Holger Parplies wrote:

 There are probably people on this list who can tell you from experience
 how VMware virtual disks behave (or VirtualBox or whatever), and it might
 even be very likely that they all behave in similar ways (such as
 changing roughly the same amount of the virtual disk file for the same
 amount of changes within the virtual machine), but there's really no
 guarantee for that. You should try it out and see what happens in your
 case.

I'm one of those people; I had to dig into VirtualBox's VDI file structure
after a problem truncated one of them and rendered my email files useless.

They have a metadata area of several thousand bytes at the beginning, but
following that are exact images of the structure you would see on a
physical disk. There's one exception: they may be sparse. However, a 20 GB
virtual disk will actually be 20 GB in size unless it's created as a
dynamic disk, and it's not clear to me whether the unused portions will be
in the image, or whether the sparse-array approach will be handled by the
metadata.
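The sparse case is easy to demonstrate: the apparent size (what rsync and a backup tool read) and the allocated size can differ wildly. A minimal sketch, assuming a POSIX filesystem that supports sparse files (the 20 MiB figure is just a stand-in for the 20 GB disk discussed above):

```python
# A "dynamic" image can be sparse: full apparent size, but unwritten
# regions occupy no disk blocks. Whether a given VDI/VMDK behaves this
# way depends on the product and the host filesystem.
import os, tempfile

with tempfile.NamedTemporaryFile(delete=False) as f:
    f.truncate(20 * 1024 * 1024)    # 20 MiB apparent size, nothing written
    path = f.name

st = os.stat(path)
print(st.st_size)                            # 20971520 -- what a reader sees
print(getattr(st, "st_blocks", 0) * 512)     # far smaller where sparse files
                                             # are supported
os.unlink(path)
```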

Were I attempting to back up the images I would assume that the unused
areas would be included. My solution to backing up my VMs was to install
backuppc for each of them and treat them the same as physical machines on
my net. This did lead to problems backing up Win2K and WinXP VMs, but only
those already fully addressed for physical systems.

-- 
Jim Kyle
mailto: j...@jimkyle.com




Re: [BackupPC-users] Recovery from archive

2011-06-07 Thread Timothy Murphy
Les Mikesell wrote:

 I can see that I should save /etc/backuppc ,
 but is it really necessary to save /var/lib/BackupPC/ ?
 This seems to be exactly what BackupPC has archived,
 and is almost the same size as the archive file
 (48GB archive file against 34GB /BackupPC in my case).
 
 I don't understand the difference in size when you say they are linked.

/BackupPC is linked to /var/lib/BackupPC ;
the archive file is on my external disk:
48835580 -rw-r- 1 backuppc backuppc 50007623015 Jun  7 03:54 helen.481.tar.gz

 (I have actually copied everything to another computer,
 but that was just so I can copy back to another disk.)
 
 Did you copy in a way that preserves the hard links within the archive?

Yes, I started with rsync -auvz but found this was giving
an enormously large copy, much larger than the original.
So I changed to rsync -auvzH and now it is the same size.
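The size blow-up without -H is easy to reproduce in miniature: a BackupPC pool is mostly hard links, and a link-unaware copy stores every link as a full file. A toy illustration using a throwaway directory (the file names are made up for the demo):

```python
# One 1 MiB pool entry referenced by two extra hard links: a naive size
# sum (what a link-unaware copy would store) triple-counts the data,
# while counting each inode once (what rsync -H preserves) does not.
import os, tempfile

d = tempfile.mkdtemp()
data = os.path.join(d, "pool_file")
with open(data, "wb") as f:
    f.write(bytes(1024 * 1024))            # 1 MiB pool entry
for n in ("pc_link1", "pc_link2"):         # two backups referencing it
    os.link(data, os.path.join(d, n))

naive = sum(os.path.getsize(os.path.join(d, n)) for n in os.listdir(d))
seen, dedup = set(), 0
for n in os.listdir(d):
    st = os.stat(os.path.join(d, n))
    if (st.st_dev, st.st_ino) not in seen:  # count each inode once
        seen.add((st.st_dev, st.st_ino))
        dedup += st.st_size
print(naive, dedup)                         # 3145728 1048576
```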

-- 
Timothy Murphy  
e-mail: gayleard /at/ eircom.net
tel: +353-86-2336090, +353-1-2842366
s-mail: School of Mathematics, Trinity College, Dublin 2, Ireland




Re: [BackupPC-users] Backup of VM images

2011-06-07 Thread Jeffrey J. Kosowsky
Andrew Schulman wrote at about 09:31:14 -0400 on Tuesday, June 7, 2011:
   If they don't change between runs, backuppc will pool the new instance 
   with the previous, although a full backup may still take a long time as 
   the block checksum verification is done over the whole file.  If they do 
   change and you use rsync, only the differences will be transferred (to 
   the extent that rsync can find them and resync on the matching parts in 
   a huge file), but the server will use the old copy and the differences 
   to reconstruct a full-sized copy which is slow and won't be pooled with 
   anything else.
  
  Although this is true generally, I don't think it applies in this case.
  What you say is true in the case of a single file that has changes in it.
  Then rsync efficiently transfers only the delta.
  
  But that doesn't apply in this case, because backuppc doesn't change the
  existing VM image in the storage pool.  Instead, it creates an entire new
  file, which then has to be transferred completely.  Even if the new file is
  99.99% identical some other file in the pool, it won't help because rsync
  isn't comparing that file to every other file in the pool.  It's only
  comparing the source and target copies of the new file, and the target
  doesn't exist yet, so it has to be copied completely from the source.
  
  Someone please correct me if I'm wrong about that.
I believe this is what happens...

If the file name & path is unchanged, then BackupPC/rsync knows to
compare it with the existing pooled file. The file has to be read and
checksummed on both ends (and possibly decompressed on the server side
if using the cpool), and if there are *any* changes then a new version is
constructed and written to the pool based on the delta and the
existing pooled version. However, only the deltas, and not the entire
file, are transferred across the slow WAN link -- which is the point of
this thread.

So the speed will be limited primarily by the read/write/decompression
speed on the client & server, plus the overhead of rsync's delta
algorithm. If changes are limited, then the WAN speed will generally
not be rate limiting.
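For the curious, part of why rsync's delta search is cheap is that its "weak" checksum can be slid one byte at a time in O(1), which lets it look for matching blocks at arbitrary offsets without rehashing each window. A minimal sketch of the rolling property (Adler-32-like; real rsync also applies a modulus, omitted here):

```python
# Rolling weak checksum: a = sum of bytes, b = position-weighted sum.
# Sliding the window one byte updates both in constant time.
def weak(buf: bytes):
    a = sum(buf)
    b = sum((len(buf) - i) * x for i, x in enumerate(buf))
    return a, b

def roll(a, b, out, inp, blocklen):
    """Slide the window: drop byte `out`, take in byte `inp`."""
    a = a - out + inp
    b = b - blocklen * out + a
    return a, b

data = bytes(range(200))
L = 64
a, b = weak(data[0:L])
for i in range(1, 100):
    a, b = roll(a, b, data[i - 1], data[i - 1 + L], L)
    assert (a, b) == weak(data[i:i + L])   # rolled == recomputed from scratch
print("rolled checksums match recomputed ones")
```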




Re: [BackupPC-users] Backup of VM images

2011-06-07 Thread Boniforti Flavio
Hello Jim

[cut]

 Were I attempting to back up the images I would assume that 
 the unused areas would be included. My solution to backing up 
 my VMs was to install backuppc for each of them and treat 
 them the same as physical machines on my net. This did lead 
 to problems backing up Win2K and WinXP VMs, but only those 
 already fully addressed for physical systems.

I understand you are/were working on the same LAN.

My trouble begins at the point where there are 15km between the HQ and
the backup location!

Flavio Boniforti

PIRAMIDE INFORMATICA SAGL
Via Ballerini 21
6600 Locarno
Switzerland
Phone: +41 91 751 68 81
Fax: +41 91 751 69 14
URL: http://www.piramide.ch
E-mail: fla...@piramide.ch 



Re: [BackupPC-users] Backup of VM images

2011-06-07 Thread Les Mikesell
On 6/7/2011 9:51 AM, Boniforti Flavio wrote:
 Hello Jim

 [cut]

 Were I attempting to back up the images I would assume that
 the unused areas would be included. My solution to backing up
 my VMs was to install backuppc for each of them and treat
 them the same as physical machines on my net. This did lead
 to problems backing up Win2K and WinXP VMs, but only those
 already fully addressed for physical systems.

 I understand you are/were working on the same LAN.

 My trouble begins at the point where there are 15km between the HQ and
 the backup location!

That's a problem that a sufficient amount of money can solve, with 
'sufficient' varying wildly depending on your location and network 
providers.   But in any case it is likely to be more efficient to back 
up the live machines (virtual or otherwise) than their disk images - and 
that way you also get useful pooling for the storage.

One other point that I'm not sure anyone mentioned yet is that the rsync 
comparison is normally against the previous full run, so it will be 
important to either do only fulls or set incremental levels to make each 
run backed by the previous so the differences don't accumulate over time.

-- 
   Les Mikesell
   lesmikes...@gmail.com



Re: [BackupPC-users] backuppc and Win7

2011-06-07 Thread owens

- Original Message -
From: Holger Parplies
To: General list for user discussion, questions and support
Sent: 6/6/2011 10:03:45 PM
Subject: Re: [BackupPC-users] backuppc and Win7


Hi,

higuita wrote on 2011-06-05 22:10:28 +0100 [Re: [BackupPC-users] backuppc and 
Win7]:
 On Sun, 5 Jun 2011 13:15:16 -0700, ow...@netptc.net wrote:
  all other machines on the LAN) and pings work fine. Backuppc fires off
  the error Get fatal error during xfer (no files dumped for share C$).
  Any suggestions?
 
 Yes... share the c drive... depending on your config, you might
 not have the c$ share enabled by default (or restricted by the firewall
 and network settings).

shouldn't that lead to a different error? As I read the source, "no files
dumped for share" is produced for a dump where no other error was detected,
yet no files were read - perhaps due to insufficient permissions to access
any files (oh, and it's *Got* fatal error ..., so we know it's only an
approximation of the actual error message ;-).

More context from the XferLOG would probably help in determining the actual
problem. Are there error messages which BackupPC couldn't interpret leading
up to the "no files dumped ..."? Have you tried running the exact same
command BackupPC is trying from the command line? Can you connect to the
client machine with 'smbclient' (something like
'smbclient //host/C\$ -U backup-user', if 'backup-user' is what BackupPC is
using; see the man page for details) and list the contents of the share?

Regards,
Holger 

Holger and Tom et al
Due to the frequent MS warnings about sharing the C drive, I decided to merely 
create a shared folder called Users.  This folder is accessible from the 3 
other machines on the network (XP and Vista) without difficulty.  When I 
attempt to do a full backup I again get the same error message.  Here is the 
full error from the log:

Running: /usr/bin/smbclient larryshp\\Users -I 10.10.10.101 -U backuppc -E -d 1 -c tarmode\ full -Tc -
full backup started for share Users
Xfer PIDs are now 3355,3354
session setup failed: SUCCESS - 0
session setup failed: SUCCESS - 0
tarExtract: Done: 0 errors, 0 filesExist, 0 sizeExist, 0 sizeExistComp, 0 filesTotal, 0 sizeTotal
Got fatal error during xfer (No files dumped for share Users)
Backup aborted (No files dumped for share Users)
Not saving this as a partial backup since it has fewer files than the prior one (got 0 and 0 files versus 0)

The computer name and IP address are correct, but the session setup fails.  
Any suggestions of where to go from here?  BTW thanks Tom for the rsync 
instructions, but since I already have 3 machines operating successfully I'd 
like to try the samba approach before I throw the baby in the sea.

Larry





Re: [BackupPC-users] Backup of VM images

2011-06-07 Thread Boniforti Flavio
Hello again Les...


On 07.06.11 17:42, Les Mikesell lesmikes...@gmail.com wrote:

 That's a problem that a sufficient amount of money can solve, with
 'sufficient' varying wildly depending on your location and network
 providers.   But in any case it is likely to be more efficient to back
 up the live machines (virtual or otherwise) than their disk images - and
 that way you also get useful pooling for the storage.

Well, I don't know *why* they ask me to do integral backups of their images,
but I simply can guess: it's because they eventually want to be able to go
over to that remote location with their USB HDD and copy that image over,
place it on the server and run it!
 
 One other point that I'm not sure anyone mentioned yet is that the rsync
 comparison is normally against the previous full run, so it will be
 important to either do only fulls or set incremental levels to make each
 run backed by the previous so the differences don't accumulate over time.

Could you please depict a bit more in depth this part?
AFAIU, if I do too many incrementals I'd have to take into account growing
backup times from differential to differential.
On the other hand, if I'd do only full backups, I'd have way longer backup
times, for *each* single backup shot.

Is the above right or am I missing something?

Thanks again and kind regards,

Flavio Boniforti

PIRAMIDE INFORMATICA SAGL
Via Ballerini 21
6600 Locarno
Switzerland
Phone: +41 91 751 68 81
Fax: +41 91 751 69 14
Url: http://www.piramide.ch
E-mail: fla...@piramide.ch





Re: [BackupPC-users] Backup of VM images

2011-06-07 Thread Andrew Schulman
 If the file name  path is unchanged then BackupPC/rsync knows to
 compare it with the existing pooled file. The file has to be read and
 checksum'd on both ends (and possibly decompressed on the server side
 if using the cpool) and if there are *any* changes then a new version is
 constructed and written to the pool based on the delta and the
 existing pooled version. However, only the deltas and not the entire
 file is transferred across the slow WAN link -- which is the point of
 this thread.

OK, I think you're right.  I think I was thinking of a different case, of
backing up the backup file system to a remote site.  Thanks.




Re: [BackupPC-users] Backup of VM images

2011-06-07 Thread Les Mikesell
On 6/7/2011 11:22 AM, Boniforti Flavio wrote:

 That's a problem that a sufficient amount of money can solve, with
 'sufficient' varying wildly depending on your location and network
 providers.   But in any case it is likely to be more efficient to back
 up the live machines (virtual or otherwise) than their disk images - and
 that way you also get useful pooling for the storage.

 Well, I don't know *why* they ask me to do integral backups of their images,
 but I simply can guess: it's because they eventually want to be able to go
 over to that remote location with their USB HDD and copy that image over,
 place it on the server and run it!

That makes sense in terms of being conceptually easy, but it still may 
not be practical to get clean copies in a given time span.

 One other point that I'm not sure anyone mentioned yet is that the rsync
 comparison is normally against the previous full run, so it will be
 important to either do only fulls or set incremental levels to make each
 run backed by the previous so the differences don't accumulate over time.

 Could you please depict a bit more in depth this part?
 AFAIU, if I do too many incrementals I'd have to take in account growing
 backup times from differential to differential.
 On the other hand, if I'd do only full backups, I'd have way longer backup
 times, for *each* single backup shot.

There is a tradeoff between the way files that have and haven't changed 
are handled.  In an incremental, unchanged files as determined by 
timestamp/length are skipped quickly where fulls will read through the 
file contents doing a block checksum verify.   Changed files are 
processed by sending the differences from the previous full or 
appropriate merged incremental level. Full runs rebuild the tree for the 
next comparison.   If you are backing up a directory of images that all 
change on every run, you might as well do fulls every time.  If only a 
subset of the files will have changed then incrementals will be faster 
but you have to rebase the tree eventually.  If they do something like 
copy each image snapshot to a new filename (perhaps with a timestamp), 
there won't be any good way to handle it.
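For reference, the "set incremental levels" idea maps to a config.pl 
fragment roughly like the following. This is a sketch, not a tested config: 
the periods are illustrative, but $Conf{IncrLevels} is the knob that makes 
each incremental diff against the previous run instead of the last full:

```
# Weekly fulls, daily incrementals; each incremental is based on the
# previous one (levels 1..6) so deltas don't accumulate against an
# ever-older full.
$Conf{FullPeriod} = 6.97;
$Conf{IncrPeriod} = 0.97;
$Conf{IncrLevels} = [1, 2, 3, 4, 5, 6];
```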

-- 
   Les Mikesell
lesmikes...@gmail.com



Re: [BackupPC-users] Backup of VM images

2011-06-07 Thread Timothy J Massey
Les Mikesell lesmikes...@gmail.com wrote on 06/07/2011 11:42:56 AM:

 On 6/7/2011 9:51 AM, Boniforti Flavio wrote:
  Hello Jim
 
  [cut]
 
  Were I attempting to back up the images I would assume that
  the unused areas would be included. My solution to backing up
  my VMs was to install backuppc for each of them and treat
  them the same as physical machines on my net. This did lead
  to problems backing up Win2K and WinXP VMs, but only those
  already fully addressed for physical systems.
 
  I understand you are/were working on the same LAN.
 
  My trouble begins at the point where there are 15km between the HQ and
  the backup location!
 
 That's a problem that a sufficient amount of money can solve, with 
 'sufficient' varying wildly depending on your location and network 
 providers.   But in any case it is likely to be more efficient to back 
 up the live machines (virtual or otherwise) than their disk images - and 

 that way you also get useful pooling for the storage.

I strongly recommend both:  BackupPC to back up inside of the virtual 
machines, and some sort of regular (say, monthly or weekly) snapshot 
backup of the entire VM image.

Trying to restore a single file from a snapshotted VM is a *lot* harder 
than using BackupPC to do it.  But using BackupPC to try to restore a 
crashed VM is a *lot* harder than using a snapshot backup (and then using 
BackupPC to make sure the files are up to date).

 One other point that I'm not sure anyone mentioned yet is that the rsync 

 comparison is normally against the previous full run, so it will be 
 important to either do only fulls or set incremental levels to make each 

 run backed by the previous so the differences don't accumulate over 
time.

I'm not sure that really matters.  The bandwidth usage will be similar to 
the difference between incrementals and fulls in a traditional BackupPC 
setup (i.e. you already have those same bandwidth issues:  the VM's aren't 
going to make it worse).  The biggest problem is that *every* backup is 
going to have to read 100% of the data every time.  In other words, there 
really is no such thing as an incremental:  an incremental and a full will 
read the same amount of data on both ends.

You could certainly use BackupPC for backing up VM's:  it's just a matter 
of scale.  But having an aged series of snapshot backups makes *very* 
little sense.  You will *NOT* want to use your snapshot backups to try to 
pull back old files.  You really just want a handful of very recent copies 
(in case one is bad or you make some sort of catastrophic change you want 
to back out).  Your aged series of backups should be done at the file 
level (inside of the VM), and that's 100% a standard BackupPC solution.

Timothy J. Massey
 
Out of the Box Solutions, Inc. 
Creative IT Solutions Made Simple!
http://www.OutOfTheBoxSolutions.com
tmas...@obscorp.com 
 
22108 Harper Ave.
St. Clair Shores, MI 48080
Office: (800)750-4OBS (4627)
Cell: (586)945-8796 


Re: [BackupPC-users] Backup of VM images

2011-06-07 Thread Les Mikesell
On 6/7/2011 1:04 PM, Timothy J Massey wrote:

 I'm not sure that really matters. The bandwidth usage will be similar to
 the difference between incrementals and fulls in a traditional BackupPC
 setup (i.e. you already have those same bandwidth issues: the VM's
 aren't going to make it worse). The biggest problem is that *every*
 backup is going to have to read 100% of the data every time. In other
 words, there really is no such thing as an incremental: an incremental
 and a full will read the same amount of data on both ends.

But if bandwidth is the bottleneck and rsync succeeds at finding the 
matching parts, having a closer matching file should be faster.

 You could certainly use BackupPC for backing up VM's: it's just a matter
 of scale. But having an aged series of snapshot backups makes *very*
 little sense. You will *NOT* want to use your snapshot backups to try to
 pull back old files. You really just want a handful of very recent
 copies (in case one is bad or you make some sort of catastrophic change
 you want to back out). Your aged series of backups should be done at the
 file level (inside of the VM), and that's 100% a standard BackupPC
 solution.

While the typical use would be to revive the latest copy after some sort 
of disaster, I wouldn't rule out wanting older versions too.  For 
example if you had a security intrusion or an update-gone-wrong, you 
might want to back out to something older and known-good.

-- 
   Les Mikesell
lesmikes...@gmail.com



[BackupPC-users] Best version of rsync.exe to use with cygwin?

2011-06-07 Thread Steven Johnson
Greetings! What's the best version of rsync.exe to use on a Windows machine
with Cygwin? I've been using version 2.6.8 but experience very slow transfer
rates and often get the "file has vanished" error in my xfer logs.
They're up to version 3.0.8 of rsync; does anyone have experience with this
version? Does anyone have a precompiled exe that I can test/use? Thanks in
advance!

-Steven




Re: [BackupPC-users] Best version of rsync.exe to use with cygwin?

2011-06-07 Thread Michael Stowe

It's not just rsync.exe that's important, it's also your Cygwin DLLs. 
That said, this is the version that works well for me:

rsync  version 3.0.3  protocol version 30
Copyright (C) 1996-2008 by Andrew Tridgell, Wayne Davison, and others.
Web site: http://rsync.samba.org/
Capabilities:
64-bit files, 64-bit inums, 32-bit timestamps, 64-bit long ints,
socketpairs, hardlinks, symlinks, no IPv6, batchfiles, inplace,
append, ACLs, no xattrs, iconv, symtimes, preallocation

My Posix Emulation DLL is 1.7.0, which is equally, if not more, important.


 Greetings what's the best version of Rsync.exe to use on Windows machine
 with Cygwin. I've been using version 2.6.8 but experience very slow
 transfer
 rates and often get the file has vanished:.. error in my xfer logs.
 They're up to version 3.08 of rsync, does anyone have experience with this
 version. Does anyone have a precompiled exe that I can test/use? Thanks in
 advance!

 -Steven




Re: [BackupPC-users] Best version of rsync.exe to use with cygwin?

2011-06-07 Thread Carl Wilhelm Soderstrom
On 06/07 02:45 , Steven Johnson wrote:
 Greetings what's the best version of Rsync.exe to use on Windows machine
 with Cygwin. I've been using version 2.6.8 but experience very slow transfer
 rates and often get the file has vanished:.. error in my xfer logs.
 They're up to version 3.08 of rsync, does anyone have experience with this
 version. Does anyone have a precompiled exe that I can test/use? Thanks in
 advance!

I just install the full set of cygwin tools on the client machines in
question; and install whatever the latest version is. Apart from minor
config file changes in rsyncd.conf (notably, replacing c:/Documents and
Settings with /cygdrive/c/Documents and Settings); haven't had any
problems.
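
As a concrete illustration, an rsyncd.conf module of the sort described 
might look like this (the module name, user and secrets path are 
placeholders, not from this thread; only the /cygdrive/c/... path style is):

```
[docs]
    path = /cygdrive/c/Documents and Settings
    read only = true
    auth users = backuppc
    secrets file = /etc/rsyncd.secrets
```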

-- 
Carl Soderstrom
Systems Administrator
Real-Time Enterprises
www.real-time.com



Re: [BackupPC-users] Backup of VM images

2011-06-07 Thread Timothy J Massey
Boniforti Flavio fla...@piramide.ch wrote on 06/07/2011 12:22:18 PM:

 Could you please depict a bit more in depth this part?
 AFAIU, if I do too many incrementals I'd have to take in account growing
 backup times from differential to differential.
 On the other hand, if I'd do only full backups, I'd have way longer 
backup
 times, for *each* single backup shot.

Not in this case.

In a normal (data in zillions of files) environment, an incremental 
backup can skip reading 99% of the files because they will not have 
changed.  In a VM (data in a few very large files) environment, every 
one of the files will have changed every day.  Therefore, both 
incrementals and fulls will read *exactly* the same amount of data:  all 
of it.

The only difference between fulls and incrementals, then, is how much data 
is *transferred*.  Incrementals will grow during the week:  if you change 
an average of 1GB per day, then the incremental will transfer 1GB on the 
first day, 2GB on the second, 3GB on the third, etc. until it does the 
next full, when it will then reset and start again.
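
In numbers, using that same 1GB/day example (a toy calculation, nothing 
more):

```python
# Toy model of the cumulative-delta effect: ~1 GB of new changes per day,
# with a weekly full that resets the baseline. Each incremental transfers
# the whole delta since the last full, not just that day's changes.
daily_delta_gb = 1
incrementals = [day * daily_delta_gb for day in range(1, 7)]
print(incrementals)       # per-run transfer grows each day, in GB
print(sum(incrementals))  # total moved by incrementals before the next full
```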

That, by the way, is *exactly* the same thing that will happen in a 
normal environment, too.  Most of us simply do not care, even in 
low-bandwidth environments, because our deltas are still small enough that 
it really doesn't matter.  For example, my incrementals on a remote office 
vary from 30 minutes to 300 minutes.  5 hours in the middle of the night 
is not at all an issue for me.

Anyway, I stand by my (and most everyone else's) point:  BackupPC will do 
this job fine.  HOWEVER, the usage pattern of this project does *not* fit 
the strengths of BackupPC:  you will get almost no advantage from using 
BackupPC than most any other tool.  In fact, most of the features of 
BackupPC (pooling and long-term aging) are completely useless for this 
application.

By the way, I have seen very little acknowledgement on your part of what 
is by *far* the hardest part of snapshot-level backups:  the snapshots. 
How are you quiescing the targets?  How are you getting exclusive access 
to the datafiles?  Will it result in downtime for your VM's and is that 
acceptable?  If not, how are you getting around this?

Backup at the VM level looks *nothing* like backup at the filesystem 
level, and most people have almost no understanding of this.  You are 
*far* from alone in this, which is why there are a bunch of companies that 
make snapshot-level tools for backup for VM's.  They are *far* superior to 
trying to bend your thumb back to your wrist and make BackupPC do the job.

Timothy J. Massey
 
Out of the Box Solutions, Inc. 
Creative IT Solutions Made Simple!
http://www.OutOfTheBoxSolutions.com
tmas...@obscorp.com 
 
22108 Harper Ave.
St. Clair Shores, MI 48080
Office: (800)750-4OBS (4627)
Cell: (586)945-8796 


Re: [BackupPC-users] Backup of VM images

2011-06-07 Thread Timothy J Massey
Les Mikesell lesmikes...@gmail.com wrote on 06/07/2011 02:29:13 PM:

 On 6/7/2011 1:04 PM, Timothy J Massey wrote:
 
  I'm not sure that really matters. The bandwidth usage will be similar 
to
  the difference between incrementals and fulls in a traditional 
BackupPC
  setup (i.e. you already have those same bandwidth issues: the VM's
  aren't going to make it worse). The biggest problem is that *every*
  backup is going to have to read 100% of the data every time. In other
  words, there really is no such thing as an incremental: an incremental
  and a full will read the same amount of data on both ends.
 
 But if bandwidth is the bottleneck and rsync succeeds at finding the 
 matching parts, having a closer matching file should be faster.

Sure, but is the difference compelling?   Probably not.  If it is, you're 
*way* too close to the margins anyway.  What happens if the office has a 
busy day?  You're going to run out of bandwidth anyway.

Part of this has to do with what your deltas look like, but there is *NO* 
difference between the bandwidth used backing up VM's and that used by a 
normal filesystem-based backup.  The deltas will be identical.  And how many of 
us run daily fulls because we don't have the time for the increasing 
incrementals?  I would venture to say *very* few...

 While the typical use would be to revive the latest copy after some sort 

 of disaster, I wouldn't rule out wanting older versions too.  For 
 example if you had a security intrusion or an update-gone-wrong, you 
 might want to back out to something older and known-good.

Have you actually used virtualization?  This sounds OK in theory, but not 
in practice!  :)

That is what live snapshots are for.  How long are you going to want to go 
back?  Trust me, if you don't want to go back within a few hours, you are 
*NOT* going to want to go back even days later.  In which case, your aged 
backups are *USELESS*.

With virtualization, not everything needs to be fixed by the BackupPC (or 
any other backup) hammer.  That's the fun of having all those extra layers 
between the OS and the hardware!  :)  (And if you have real SAN hardware, 
it can get *really* fun:  can you say thin-on-thin provisioning?  It's 
like shorting stocks on margin!  :) )

Timothy J. Massey

 
Out of the Box Solutions, Inc. 
Creative IT Solutions Made Simple!
http://www.OutOfTheBoxSolutions.com
tmas...@obscorp.com 
 
22108 Harper Ave.
St. Clair Shores, MI 48080
Office: (800)750-4OBS (4627)
Cell: (586)945-8796 


Re: [BackupPC-users] Backup of VM images

2011-06-07 Thread Les Mikesell
On 6/7/2011 2:34 PM, Timothy J Massey wrote:

   While the typical use would be to revive the latest copy after some sort
   of disaster, I wouldn't rule out wanting older versions too. For
   example if you had a security intrusion or an update-gone-wrong, you
   might want to back out to something older and known-good.

 Have you actually used virtualization?

Yes, of course - doesn't everyone...

 This sounds OK in theory, but not
 in practice! :)

 That is what live snapshots are for.

I like my backups to serve double-duty.  That is, to cover both 
host/disk issues and any form of file corruption/deletion.  VM-managed 
snapshots don't.

 How long are you going to want to
 go back? Trust me, if you don't want to go back within a few hours, you
 are *NOT* going to want to go back even days later. In which case, your
 aged backups are *USELESS*.

Not true.  I've had reasons to use both virtual and physical disk image 
backups that were years old.  The trick is to not store data in the same 
place as the programs.

 With virtualization, not everything needs to be fixed by the BackupPC
 (or any other backup) hammer. That's the fun of having all those extra
 layers between the OS and the hardware! :) (And if you have real SAN
 hardware, it can get *really* fun: can you say thin-on-thin
 provisioning? It's like shorting stocks on margin! :) )

Sure, but would you rather pay for a remote-sync SAN with snapshots or a 
box running backuppc somewhere?   But that brings up a slightly related 
topic: has anyone looked at the recent FreeNAS beta to see if its remote 
replication would work for a backuppc archive (as in zfs snapshot 
incrementals...)?

-- 
Les Mikesell
 lesmikes...@gmail.com



Re: [BackupPC-users] Best version of rsync.exe to use with cygwin?

2011-06-07 Thread Steven Johnson
Mind sharing with me a zip of the package? 

-Steven

-Original Message-
From: Michael Stowe [mailto:mst...@chicago.us.mensa.org] 
Sent: Tuesday, June 07, 2011 2:59 PM
To: General list for user discussion, questions and support
Subject: Re: [BackupPC-users] Best version of rsync.exe to use with cygwin?


It's not just rsync.exe that's important, it's also your cygwin dll's. 
That said, this is the version that works well for me:

rsync  version 3.0.3  protocol version 30 Copyright (C) 1996-2008 by Andrew
Tridgell, Wayne Davison, and others.
Web site: http://rsync.samba.org/
Capabilities:
64-bit files, 64-bit inums, 32-bit timestamps, 64-bit long ints,
socketpairs, hardlinks, symlinks, no IPv6, batchfiles, inplace,
append, ACLs, no xattrs, iconv, symtimes, preallocation

My Posix Emulation DLL is 1.7.0, which is equally, if not more, important.


 Greetings what's the best version of Rsync.exe to use on Windows 
 machine with Cygwin. I've been using version 2.6.8 but experience very 
 slow transfer rates and often get the file has vanished:.. error in 
 my xfer logs.
 They're up to version 3.08 of rsync, does anyone have experience with 
 this version. Does anyone have a precompiled exe that I can test/use? 
 Thanks in advance!

 -Steven







Re: [BackupPC-users] Best version of rsync.exe to use with cygwin?

2011-06-07 Thread Jeffrey J. Kosowsky
Steven Johnson wrote at about 14:45:09 -0400 on Tuesday, June 7, 2011:
  Greetings what's the best version of Rsync.exe to use on Windows machine
  with Cygwin. I've been using version 2.6.8 but experience very slow transfer
  rates and often get the file has vanished:.. error in my xfer logs.
  They're up to version 3.08 of rsync, does anyone have experience with this
  version. Does anyone have a precompiled exe that I can test/use? Thanks in
  advance!

Most importantly in my experience, ver 2.x has issues with long file
names; also, it is less memory efficient and slower since I believe it
reads in the entire file list before starting. The bottom line is that
even though BackupPC doesn't use all the features of 3.x (it still
uses the older protocol version), it can benefit from bug fixes,
speed, and efficiency.



[BackupPC-users] Restore Files Newer Than Date

2011-06-07 Thread Gene Cooper

Hi,

I've been using BackupPC (3.1.0 currently) for a few years quite 
successfully.  Many thanks to all developers and contributors!


How can I perform a restore of all files changed yesterday?

I had a server fail today, but there was a full backup done last night. 
 It's many gigabytes over a WAN connection.


I had a separate local-disk backup system, which I refer to as Level 1, 
which I used to restore the server to 'the day before yesterday'.  But I 
need to restore 'yesterday' from BackupPC over the WAN.


I've searched the archives and the wiki, and Googled.

I can't help but think there is some clever command line that will do 
this for me...
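
The closest thing I can picture is along these lines (an untested sketch: 
restore into a staging tree with BackupPC_tarCreate, then select by mtime 
with GNU find; the tarCreate line is commented out because it needs the 
live BackupPC server, and the hostname and paths are made-up placeholders):

```shell
# Step 1 (illustrative only; run on the BackupPC server as the backuppc user):
#   BackupPC_tarCreate -h myserver -n -1 -s / . | tar -xf - -C "$STAGE"
# Step 2: from the staged tree, keep only files modified after a cutoff.
STAGE=$(mktemp -d)
mkdir -p "$STAGE/etc"
touch -d '2011-06-05 00:00' "$STAGE/etc/old.conf"   # unchanged file
touch -d '2011-06-07 03:00' "$STAGE/etc/new.conf"   # changed yesterday
# GNU find's -newermt selects files modified after the given timestamp:
find "$STAGE" -type f -newermt '2011-06-06 00:00'
```

(The touch lines just fake a staged tree so the find invocation can be 
shown; in practice the matching files would then be copied over the WAN 
onto the restored server.)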


Thanks in advance,

G

--

===
Gene Cooper
Sonora Communications, Inc.
936 W. Prince Road
Tucson, AZ  85705

(520) 407-2000 x101
(520) 888-4060 fax

gcoo...@sonoracomm.com


Re: [BackupPC-users] backuppc and Win7

2011-06-07 Thread Kris Lou
Can you access the share via standalone smbclient, since that's how you're
choosing to back things up?  As I recall, there are some registry changes
necessary for Windows 7 but that might only be for domain membership.

http://wiki.samba.org/index.php/Windows7

Kris Lou
k...@themusiclink.net


On Tue, Jun 7, 2011 at 8:41 AM, ow...@netptc.net wrote:



 - Original Message -
 *From: *Holger Parplies wb...@parplies.de
 *To: *General list for user discussion, questions and 
 supportbackuppc-users@lists.sourceforge.net
 *Sent: *6/6/2011 10:03:45 PM
 *Subject: *Re: [BackupPC-users] backuppc and Win7

 Hi,

 higuita wrote on 2011-06-05 22:10:28 +0100 [Re: [BackupPC-users] backuppc
 and Win7]:
  On Sun, 5 Jun 2011 13:15:16 -0700, ow...@netptc.net wrote:
   all other machines on the LAN) and pings work fine. Backuppc fires off
   the error Get fatal error during xfer (no files dumped for share C$).
   Any suggestions?
 
  Yes... share the c drive... depending on your config, you might
  not have the c$ share enabled by default (or restricted by the firewall
  and network settings).

 shouldn't that lead to a different error? As I read the source, no files
 dumped for share is produced for a dump where no other error was detected,

 yet no files were read - perhaps due to insufficient permissions to access
 any files (oh, and it's *Got* fatal error ..., so we know it's only an
 approximation of the actual error message ;-).

 More context from the XferLOG would probably help in determining the actual

 problem. Are there error messages which BackupPC couldn't interpret leading

 up to the no files dumped ...? Have you tried running the exact same
 command BackupPC is trying from the command line? Can you connect to the
 client machine with 'smbclient' (something like
 'smbclient //host/C\$ -U backup-user', if 'backup-user' is what BackupPC is

 using; see the man page for details) and list the contents of the share?

 Regards,
 Holger

 Holger and Tom et al
 Due to the frequent MS warnings about sharing the C drive I decided to
 merely create a shared folder called Users.  This folder is accessible from
 the 3 other machines on the network (XP and Vista) without difficulty.  When
 I attempt to do a full backup I again get the same error message.  Here is
 the full error from the Log:

 Running: /usr/bin/smbclient larryshp\\Users -I 10.10.10.101 -U backuppc 
 -E -d 1 -c tarmode\ full -Tc -
 full backup started for share Users
 Xfer PIDs are now 3355,3354
 session setup failed: SUCCESS - 0
 session setup failed: SUCCESS - 0
 tarExtract: Done: 0 errors, 0 filesExist, 0 sizeExist, 0 sizeExistComp, 0 
 filesTotal, 0 sizeTotal
 Got fatal error during xfer (No files dumped for share Users)
 Backup aborted (No files dumped for share Users)
 Not saving this as a partial backup since it has fewer files than the prior 
 one (got 0 and 0 files versus 0)

 The computer name and IP address are correct but the session setup fails.
  Any suggestions of where to go from here?  BTW thanks Tom for the rsync
 instructions, but since I already have 3 machines operating successfully I'd
 like to try the samba approach before I throw the baby in the sea.

 Larry
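 One client-side avenue worth checking for the "session setup failed" lines
 above: with older smbclient versions talking to Vista/7, forcing NTLMv2 on
 the Samba client side often helps. The settings below are real smb.conf
 parameters, but whether they are actually needed here depends on the Samba
 version in use - treat this as a sketch, not a confirmed fix:

```
# /etc/samba/smb.conf on the BackupPC server (client-side settings)
[global]
    client ntlmv2 auth = yes
    client lanman auth = no
```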
 --

 EditLive Enterprise is the world's most technically advanced content
 authoring tool. Experience the power of Track Changes, Inline Image
 Editing and ensure content is compliant with Accessibility Checking.
 http://p.sf.net/sfu/ephox-dev2dev
 ___
 BackupPC-users mailing list
 BackupPC-users@lists.sourceforge.net
 List: https://lists.sourceforge.net/lists/listinfo/backuppc-users
 Wiki: http://backuppc.wiki.sourceforge.net
 Project: http://backuppc.sourceforge.net/






Re: [BackupPC-users] Restore Files Newer Than Date

2011-06-07 Thread Holger Parplies
Hi,

Gene Cooper wrote on 2011-06-07 16:28:01 -0700 [[BackupPC-users] Restore Files 
Newer Than Date]:
 [...]
 I had a server fail today, but there was a full backup done last night. 
  It's many gigabytes over a WAN connection.
 
 I had a separate local-disk backup system, which I refer to as Level 1, 
 which I used to restore the server to 'the day before yesterday'.  But I 
 need to restore 'yesterday' from BackupPC over the WAN.
 [...]
 I can't help but think there is some clever command line that will do 
 this for me...

after writing a rather complicated reply I find myself wondering whether a
plain restore won't do just what you want, presuming the backup is configured
as an rsync(d) backup, which it almost certainly is. As you are using rsync as
the transfer method, you should be transferring only file deltas over the
WAN, though you'll probably be reading all files on both sides in the style of
a full backup.

Presuming that is, for some obscure reason, not the case, here are my original
thoughts:

If you've got enough space, you could do a local restore to a temporary
directory on the BackupPC server (or any other host on the BackupPC server's
local network) and then use rsync to transfer exactly the missing changes over
the WAN (remember the --delete options!). If you don't, you could restore only
the files changed after a certain time to a temporary directory on the BackupPC
server and then rsync that over (note that you won't be able to get rid of
files deleted yesterday, though, so you won't get *exactly* the state of the
last backup). That would be an invocation of BackupPC_tarCreate, piped into
tar with the '-N' option ('--newer=date'). If you don't have the disk space
even for that, you could play around with doing it on an sshfs mount of the
target host, though that will obviously lose any rsync savings for the files
you are restoring.
I don't know of any filter that would reduce a tar stream to only files newer
than a specific date (and remember, you want the deletions from yesterday,
too).

The first option [referring to the local restore + rsync] is both simpler and
less error-prone, so use that if [the plain restore doesn't do what you want
and] you have the space available. If you need help on the syntax of
BackupPC_tarCreate, feel free to ask.

Hope that helps.

Regards,
Holger

--
EditLive Enterprise is the world's most technically advanced content
authoring tool. Experience the power of Track Changes, Inline Image
Editing and ensure content is compliant with Accessibility Checking.
http://p.sf.net/sfu/ephox-dev2dev
___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:http://backuppc.wiki.sourceforge.net
Project: http://backuppc.sourceforge.net/


[BackupPC-users] (no subject)

2011-06-07 Thread akeem abayomi
http://aeniith.com/enterin.html