Re: [BackupPC-users] Understanding pros/cons of full vs. incremental and filled vs. unfilled for v4

2019-04-22 Thread Carl Soderstrom
If it is faster to duplicate the last filled backup and then update it,
would it be reasonable for a future version of BackupPC to pre-duplicate the
last backup during one of the maintenance operations? Then it would be
possible to simply update that when the time comes. 

Or am I misunderstanding this?
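(As a rough way to see the cost Craig describes below -- this is only an
analogy, not what BackupPC actually runs, and the paths are made up -- even a
hardlink-only duplicate of a large tree has to visit every directory entry:

# time a metadata-only duplicate of a big backup tree;
# no file data is copied, but every entry is still traversed
time cp -al /var/lib/backuppc/pc/host1/42 /var/lib/backuppc/pc/host1/42.dup

The run time grows with the number of files even though no data blocks move,
which matches the point that the traversal itself is the slow part.)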

On 04/21 05:15 , Craig Barratt via BackupPC-users wrote:
> I haven't done testing to see if having 100% fulls would be faster.  On my
> ext4 system running on software RAID 10, it is actually quite slow to
> duplicate a filled backup (the required step before starting a backup when
> you want the prior one to remain filled), since the whole directory tree
> has to be traversed.  So that part is definitely slower.  However, you are
> right that, after that, the backup is somewhat simpler, since it only
> modifies the current backup tree in place and there is no need to update
> the prior unfilled backup with reverse deltas.  Another minor advantage of
> having only filled backups is that deleting any one of them is easier, as
> you note.  Currently BackupPC needs to merge deltas into the immediately
> prior backup (if unfilled) when you delete a backup.
> 
> Craig


-- 
Carl Soderstrom
Systems Administrator
Real-Time Enterprises
www.real-time.com




Re: [BackupPC-users] Failing incremental backups on Windows 10 (backuppc 3.x)

2019-06-04 Thread Carl Soderstrom
On 06/04 01:34 , Alan Ballard wrote:
> Running a packet trace with Wireshark was more informative.  It showed the
> Windows client hitting transmission timeouts and retransmitting, then
> terminating the connection after 5 retransmits.  An interesting thing I
> noticed was that the timeout interval was quite short: about 300 ms.  It
> turns out that Windows tunes the timeout based on the observed response
> times on the connection.  In this case, at the start of the rsync operation
> there are many short, fast interactions; then the server bogs down as it
> does the delta copies of a large file.

Huh, that's really interesting. Thanks for the troubleshooting!
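For anyone who wants to check for the same behavior, a filter like this
should surface the retransmissions (interface and hostname are hypothetical,
and it assumes a reasonably recent tshark):

tshark -i eth0 -f "host client.example.com" -Y "tcp.analysis.retransmission"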

-- 
Carl Soderstrom
Systems Administrator
Real-Time Enterprises
www.real-time.com




Re: [BackupPC-users] Howto backup using a "jump host"

2019-06-21 Thread Carl Soderstrom
On 06/21 06:07 , Patrick Bégou wrote:
> Hello
> 
> I'm using BackupPC 3 for many local PCs. I would now like to back up some
> data from a remote server, located at another site, which I have to reach
> through a front-end host:
> 
> BackupPC <<--->> Frontend <<->> server-with-data


The way I solved this, after a couple of years of experimentation, is with
'autossh'. I installed the autossh package and configured a file called
/etc/autossh.hosts with the following content:

-NL 2233:10.11.22.1:22 firewall.example.com -l rte
-NL :192.168.111.1:22 firewall.example2.com -l rte

At boot time this will start SSH tunnels to two different remote networks by
logging into Linux firewalls on those networks. It will also keep those
tunnels alive if they go down.
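To confirm a tunnel is up before pointing BackupPC at it, something like
this works (port and login taken from the first example; adjust to taste):

ssh -p 2233 -l rtebakup 127.0.0.1 true && echo "tunnel ok"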

In my per-host BackupPC config files I use lines like this:

$Conf{RsyncClientCmd} = '$sshPath -q -x -l rtebakup -p 2233 127.0.0.1 /usr/bin/sudo /usr/bin/rsync $argList+';

and this:

$Conf{RsyncClientCmd} = '$sshPath -q -x -l rsyncbackup -p  127.0.0.1 /usr/bin/sudo /usr/bin/rsync $argList+';

This has been working fine for several years now.

-- 
Carl Soderstrom
Systems Administrator
Real-Time Enterprises
www.real-time.com




Re: [BackupPC-users] Howto backup using a "jump host"

2019-06-22 Thread Carl Soderstrom
On 06/22 01:57 , G.W. Haywood via BackupPC-users wrote:
> Hi there,
> 
> On Sat, 22 Jun 2019, Carl Soderstrom wrote:
> > On 06/21 06:07 , Patrick Bégou wrote:
> > > 
> > > I'm using BackupPC 3 for many local PCs. I would now like to back up some
> > > data from a remote server, located at another site, which I have to reach
> > > through a front-end host:
> > > 
> > > BackupPC <<--->> Frontend <<->> server-with-data
> > 
> > The way I solved this, after a couple of years of experimentation is with
> > 'autossh'. ...
> 
> Over almost two decades I've done similar things many times and in
> different ways using OpenVPN to create encrypted tunnels.  It isn't
> clear to me what the OP's 'Frontend' might be, and what effects that
> may have on use of things like SSH/VPN tunnels.  If it's relevant he
> might want to elaborate.

I've used OpenVPN as well, and while it works, autossh is simpler to set up
for the limited case of tunneling just one port. Still, if the capability
and knowledge are there, OpenVPN works too.

-- 
Carl Soderstrom
Systems Administrator
Real-Time Enterprises
www.real-time.com




[BackupPC-users] Recommended settings for BackupPC v4 on ZFS

2019-09-10 Thread Carl Soderstrom
We've been a BackupPC v3 shop since there's been a v3, and we're looking at
building our first v4 BackupPC server. The boss wants to put it on ZFS and a
JBOD controller.

I believe that for BackupPC v3 the advice was to turn off ZFS
filesystem-level deduplication and compression. 

Is that still true for BackupPC v4?
Are there any other suggestions for filesystem settings for ZFS when hosting
a BackupPC pool?

-- 
Carl Soderstrom
Systems Administrator
Real-Time Enterprises
www.real-time.com




Re: [BackupPC-users] Recommended settings for BackupPC v4 on ZFS

2019-09-10 Thread Carl Soderstrom
Thanks for the advice. Anyone else care to share their experience?
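For anyone following along, here is a minimal sketch of the settings Ray
describes below (the dataset name 'tank/backuppc' is hypothetical; the
$Conf line goes in BackupPC's config.pl):

zfs set compression=lz4 tank/backuppc
zfs set dedup=off tank/backuppc
# and disable BackupPC's own compression so only one side compresses:
$Conf{CompressLevel} = 0;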

On 09/10 02:57 , Ray Frush wrote:
> We back up to a ZFS-based appliance; we allow ZFS to do compression and
> disable compression in BackupPC.  We do not allow ZFS to deduplicate.
> However, since you're looking at running ZFS on the same box that's running
> BackupPC, it probably doesn't matter which one has compression turned on,
> since it's the same processors; just don't do both.
> 
> 
> 
> 
> 
> > On Sep 10, 2019, at 12:26, Carl Soderstrom  
> > wrote:
> > 
> > We've been a BackupPC v3 shop since there's been a v3 and we're looking at
> > building our first v4 BackupPC server. The boss wants to put it on ZFS and a
> > JBOD controller.
> > 
> > I believe that for BackupPC v3 the advice was to turn off ZFS
> > filesystem-level deduplication and compression. 
> > 
> > Is that still true for BackupPC v4?
> > Are there any other suggestions for filesystem settings for ZFS when hosting
> > a BackupPC pool?
> > 
> > -- 
> > Carl Soderstrom
> > Systems Administrator
> > Real-Time Enterprises
> > www.real-time.com

-- 
Carl Soderstrom
Systems Administrator
Real-Time Enterprises
www.real-time.com



Re: [BackupPC-users] Recommended settings for BackupPC v4 on ZFS

2019-09-11 Thread Carl Soderstrom
Yeah, 'noatime' is a good idea for BackupPC in general.
Thanks for the advice on compression; it's good to know.
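For reference, the ZFS equivalent of the noatime mount option is the atime
property (dataset name hypothetical):

zfs set atime=off tank/backuppc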

On 09/11 11:26 , Robert Trevellyan wrote:
> One more thing about ZFS in general - I always set noatime.

-- 
Carl Soderstrom
Systems Administrator
Real-Time Enterprises
www.real-time.com




Re: [BackupPC-users] Recommended settings for BackupPC v4 on ZFS

2019-09-11 Thread Carl Soderstrom
On 09/11 09:40 , Alexander Moisseev via BackupPC-users wrote:
> On 11.09.2019 18:19, Robert Trevellyan wrote:
> > I'm letting ZFS do the compression (using the default of LZ4) with BackupPC 
> > handling deduplication. I think you'll find a reasonable consensus that ZFS 
> > compression is always a win for storage space (it will store 
> > un-compressible data unmodified), whereas ZFS deduplication is best avoided 
> > in most cases, mostly due to its high memory usage. It's possible that 
> > BackupPC compression would be tighter than LZ4,
> 
> Actually, on ZFS you are not limited to LZ4, but ZFS compresses each file
> block independently, which is why BackupPC compression is usually tighter,
> though it depends on the data.
> 
> We recently moved from a 77.96G cpool to a pool on a compressed filesystem.
> It now consumes 81.2G, so there is not much difference.
> 
> # zfs get compression,compressratio,recordsize,referenced zroot/bpc/pool
> NAME            PROPERTY       VALUE   SOURCE
> zroot/bpc/pool  compression    gzip-3  local
> zroot/bpc/pool  compressratio  3.87x   -
> zroot/bpc/pool  recordsize     128K    default
> zroot/bpc/pool  referenced     81.2G   -

Thanks Alexander, those details are really helpful.
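For anyone who wants to run the same comparison on their own pool, the
logical (uncompressed) versus stored sizes are visible with:

zfs get used,logicalused,compressratio zroot/bpc/pool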

-- 
Carl Soderstrom
Systems Administrator
Real-Time Enterprises
www.real-time.com




Re: [BackupPC-users] Backup at different times

2017-09-01 Thread Carl Soderstrom
> From: Orazio Di nino 
>
> Hello everyone,
> I would like to back up two hosts at different times,
> e.g. host1 at 10:00 PM and host2 at 4:00 PM.
> How do I configure this?

The easiest way is to set a cron job like the one below. The '0' at the end
means an incremental backup; a '1' would mean a full backup. This forces the
job to start at a particular time.

You will also want a job that does an occasional full backup, for a more
thorough check of the files on the client against the copies in your backup
pool (see the sketch after the cron example below).

# mysql-2 is set to dump its database at 8:15pm (20:15)
# let's do a backup right after that.
20 20 * * * backuppc /usr/share/backuppc/bin/BackupPC_serverMesg backup mysql-2.example.com mysql-2.example.com backuppc 0
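And a sketch of the occasional full backup mentioned above, assuming the
same paths and hostname (the trailing '1' requests a full):

# full backup of mysql-2 at 21:00 on the 1st of each month
0 21 1 * * backuppc /usr/share/backuppc/bin/BackupPC_serverMesg backup mysql-2.example.com mysql-2.example.com backuppc 1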

--
Carl Soderstrom
Real Time Enterprises
952-943-8700
--
Carl Soderstrom
Real Time Enterprises
952-943-8700


Re: [BackupPC-users] cygwin-rsyncd-3.1.2.1: service cygrunsrv

2017-09-18 Thread Carl Soderstrom
> From: Pascal Mosimann 

> I have a dozen Windows XP clients that I would like to upgrade with
> the latest cygwin-rsyncd-3.1.2.1, but when I launch the executable
> files, I get the message "Not a valid Win32 application".
> Does anybody know how to compile cygwin-rsyncd to run on Windows XP?

It may be easiest to go back to an earlier version of Cygwin, since Cygwin
dropped support for Windows XP a while ago.
Alternatively, you can do BackupPC backups over SMB.
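If you go the SMB route, the per-host settings look roughly like this (the
share name, user, and password here are placeholders):

$Conf{XferMethod} = 'smb';
$Conf{SmbShareName} = 'C$';
$Conf{SmbShareUserName} = 'backupuser';
$Conf{SmbSharePasswd} = 'secret';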

--
Carl Soderstrom
Real Time Enterprises
952-943-8700


Re: [BackupPC-users] syncing local and cloud backups

2018-10-22 Thread Carl Soderstrom
On 10/21 05:01 , ED Fochler wrote:
> > aren't you increasing the exposure of your production system X2 by giving 
> > another backup process access to it?
> 
> Yes.  And it's the right thing to do.  Because a production failure with 
> rapid recovery is manageably bad.  Having your production and backups 
> encrypted by ransomware is a business-ending catastrophe.  I have an 
> explanation, but if that much makes sense to you, you don't need to read on.
> 
>   ED.
> 
> 
> Redundant systems generally increase the likelihood of nuisance failure, but
> decrease the likelihood of catastrophic failure.  This case is no different.
> By having two separate backup servers in different locations, maybe with
> different admins, you expose the primary machines to double the risk by
> having 2 independent methods of access.  Assuming your risk was near zero,
> doubling it shouldn't be so bad.  So yes, there's a greater risk of
> potential disruption from having multiple methods of access: x2.  Also x2
> network bandwidth.
> 
> Assuming the risk of having your backup server compromised is near (but not
> quite) zero, you are looking at a non-zero chance of everything you care
> about getting mangled by a malicious entity who happened to crack a single
> machine.  That's a non-zero chance of total, business-ending failure.
> Having a separate backup enclave means that killing production and backups
> simultaneously would require 2 near-zero-probability hacks occurring in
> rapid succession: roughly the square of the single-machine risk, e.g.
> 0.0001^2 = 0.00000001.
> 
> So the risk of simple failure, with reasonable recovery, is twice as likely.
> But the probability of production and backups being destroyed at once drops
> to roughly the square of the individual risk.  Other similarly over-cautious
> industry solutions include tape backups going into cold storage, mirrored
> RAID sets with drives that get pulled and stored in safety-deposit boxes,
> etc.  It may be overkill, and that's your call.  I will continue to suggest
> it though.  Hacking and ransomware are growing problems.  Single backup
> solutions guard well against accidents and hardware failure.  To guard
> against mischief and corruption, you want two, and you want them isolated
> from each other, perhaps from different vendors or using different
> technologies.
> 
>   Thank you for reading.  I am recovering from back surgery and find 
> myself with more free time than usual.  :-)
> 
>   Ed, the long-winded, self-important explainer and promoter of security
> practices.

Ed, thanks for posting that. I will save it, edit it, and repost to
customers when the time is appropriate. :)

-- 
Carl Soderstrom
Systems Administrator
Real-Time Enterprises
www.real-time.com




Re: [BackupPC-users] Long time user helpless now!

2019-02-07 Thread Carl Soderstrom
10 inode
> Xfer PIDs are now
> Executing DumpPostUserCmd: /opt/bin/ssh -q -x -l root agora.domain
> /boot/custom/scripts/unraid_post_backuppc
> 523e611fda90
> 4daf85bbc9a2
> 5bf24587861e
> 324c01f22a5d
> eaa59c68300d
> e17f1b8f52b0
> 1240621b6adb
> 8a05b274e7b9
> da375953cde2
> 37eb6503f86d
> da13f78c91ef
> 3782d35739bc
> Got fatal error during xfer (No files dumped for share /)
> Backup aborted (No files dumped for share /)
> BackupFailCleanup: nFilesTotal = 4, type = full, BackupCase = 1, inPlace =
> 1, lastBkupNum =
> Keeping non-empty backup #0 (/share/MD0_DATA/BackupPC/pc/agora.domain/0)
> Running BackupPC_refCountUpdate -h agora.domain -f on agora.domain
> Xfer PIDs are now 24881
> BackupPC_refCountUpdate: agora.domain #0 inodeLast set to 10 (was 1)
> BackupPC_refCountUpdate: host agora.domain got 0 errors (took 1 secs)
> Xfer PIDs are now
> Finished BackupPC_refCountUpdate (running time: 1 sec)
> Xfer PIDs are now
> 
> The conf file of the client:
> 
> $Conf{DumpPostUserCmd} = '$sshPath -q -x -l root $host
> /boot/custom/scripts/unraid_post_backuppc';
> $Conf{DumpPreUserCmd} = '$sshPath -q -x -l root $host
> /boot/custom/scripts/unraid_pre_backuppc';
> $Conf{BlackoutPeriods} = [
>   {
>     'hourEnd' => '19.5',
>     'hourBegin' => '7.9',
>     'weekDays' => [
>   1,
>   2,
>   3,
>   4,
>   5
>     ]
>   },
>   {
>     'weekDays' => [
>   0,
>   6
>     ],
>     'hourBegin' => '7.9',
>     'hourEnd' => 14
>   }
> ];
> $Conf{BackupFilesOnly} = {
>   '/' => [
>     '/mnt/user/appdata',
>     '/boot',
>     '/mnt/user/nextcloud',
>     '/mnt/user/CentOS-Data',
>     '/mnt/user/WINTESS'
>   ]
> };
> $Conf{BackupFilesExclude} = {
>   '/' => [
>     '/mnt/user/CentOS-Data/9.6'
>   ]
> };
> $Conf{EMailAdminUserName} = 'h...@domain.com';
> $Conf{EMailFromUserName} = 'backuppc@QNAP859U';
> $Conf{EMailUserDestDomain} = '@domain.com';
> $Conf{FullPeriod} = '29.97';
> $Conf{BackupZeroFilesIsFatal} = '0';
> 
> Thank you in advance
> 




-- 
Carl Soderstrom
Systems Administrator
Real-Time Enterprises
www.real-time.com




Re: [BackupPC-users] Long time user helpless now!

2019-02-07 Thread Carl Soderstrom
0 bytes
> File list size: 76
> File list generation time: 0.001 seconds
> File list transfer time: 0.000 seconds
> Total bytes sent: 247
> Total bytes received: 114
> sent 247 bytes  received 114 bytes  722.00 bytes/sec
> total size is 0  speedup is 0.00
> DoneGen: 0 errors, 0 filesExist, 0 sizeExist, 0 sizeExistComp, 0 filesTotal, 
> 0 sizeTotal, 0 filesNew, 0 sizeNew, 0 sizeNewComp, 12 inode
> Xfer PIDs are now
> xferPids
> Got fatal error during xfer (No files dumped for share /)
> cmdSystemOrEval: about to system /opt/bin/sudo -u admin /opt/bin/ping -c 1 -w 
> 3 agora.domain
> cmdSystemOrEval: about to system /opt/bin/sudo -u admin /opt/bin/ping -c 1 -w 
> 3 agora.domain
> CheckHostAlive: ran '/opt/bin/sudo -u admin /opt/bin/ping -c 1 -w 3 
> agora.domain'; returning 0.222
> Backup aborted (No files dumped for share /)
> __bpc_progress_state__ fail cleanup
> BackupFailCleanup: nFilesTotal = 0, type = full, BackupCase = 6, inPlace = 1, 
> lastBkupNum =
> BackupFailCleanup: inPlace with no new files... no cleanup
> __bpc_progress_state__ fsck
> Running BackupPC_refCountUpdate -h agora.domain -f on agora.domain
> cmdSystemOrEval: about to system 
> /opt/local/backuppc/bin/BackupPC_refCountUpdate -h agora.domain -f
> Xfer PIDs are now 11390
> xferPids 11390
> __bpc_progress_state__ refCnt #0
> __bpc_progress_state__ cntUpdate #0
> __bpc_progress_state__ rename #0
> __bpc_progress_state__ refCnt #1
> __bpc_progress_state__ cntUpdate #1
> __bpc_progress_state__ rename #1
> __bpc_progress_state__ sumUpdate
> __bpc_progress_fileCnt__ 0/128
> __bpc_progress_fileCnt__ 8/128
> __bpc_progress_fileCnt__ 16/128
> __bpc_progress_fileCnt__ 24/128
> __bpc_progress_fileCnt__ 32/128
> __bpc_progress_fileCnt__ 40/128
> __bpc_progress_fileCnt__ 48/128
> __bpc_progress_fileCnt__ 56/128
> __bpc_progress_fileCnt__ 64/128
> __bpc_progress_fileCnt__ 72/128
> __bpc_progress_fileCnt__ 80/128
> __bpc_progress_fileCnt__ 88/128
> __bpc_progress_fileCnt__ 96/128
> __bpc_progress_fileCnt__ 104/128
> __bpc_progress_fileCnt__ 112/128
> __bpc_progress_fileCnt__ 120/128
> __bpc_progress_state__ rename total
> BackupPC_refCountUpdate: host agora.domain got 0 errors (took 0 secs)
> Xfer PIDs are now
> xferPids
> Finished BackupPC_refCountUpdate (running time: 0 sec)
> Xfer PIDs are now
> xferPids
> dump failed: No files dumped for share /
> 
> 
> 
> 
> On 7.02.2019 21:44, Carl Soderstrom wrote:
> > What happens if you stop the docker images manually, and disable the
> > DumpPreUserCmd, then try running the backup? (Possibly with a command like:
> > "/usr/share/backuppc/bin/BackupPC_dump -f -v host.example.tld" to get faster
> > and more convenient feedback for debugging purposes).
> > 
> > Have you verified that there are files there to be backed up, after you run
> > the DumpPreUserCmd?
> > 
> > On 02/07 09:01 , sse450 wrote:
> > > Hi All,
> > > 
> > > I am a long time user of BackupPC. I had no problem so far with this
> > > fantastic software.
> > > 
> > > Now, I cannot backup one of my servers no matter what I tried. I would
> > > appreciate if you kindly help me.
> > > 
> > > My BackupPC server was fully functional before the server was relocated.
> > > Naturally, the router etc. were all changed at the new location. Now I
> > > cannot back up just one of my servers; all the other clients are backing
> > > up just fine. Please find below some information:
> > > 
> > > Symptom: I start a full backup. BackupPC executes DumpPreUserCmd (stops
> > > docker images), then the log says "full backup started for directory /".
> > > Immediately after this log line, BackupPC starts to execute
> > > "DumpPostUserCmd" (starts docker images). Then the dreaded "Got fatal
> > > error during xfer (No files dumped for share /)" and "Backup aborted (No
> > > files dumped for share /)" lines come up. Everything starts and finishes
> > > in just a few seconds.
> > > 
> > > General Server Information
> > > The servers PID is 13019, on host 859U, version 4.3.0, started at 
> > > 2019-02-06
> > > 11:58.
> > > This status was generated at 2019-02-07 20:38.
> > > The configuration was last loaded at 2019-02-07 20:07.
> > > PCs will be next queued at 2019-02-07 21:00.
> > > Other info:
> > > 0 pending backup requests from last scheduled wakeup,
> > > 0 pending user backup requests,
> > > 0 pending command requests,
> > > Pool 

Re: [BackupPC-users] Restoring complete virtualized Windows-Servers / Saving MBR

2012-04-16 Thread Carl Soderstrom
- Original Message -
> From: "Andreas Piening" 
> One thing you stated about doing virtual disk image backups is that
> they're huge. As soon as anything in the virtual machine changes,
> the complete image needs to be stored again to keep the image backup
> in sync. That's what I want to avoid by using LVM volumes: I can
> create a read-only snapshot of one and mount it into the filesystem
> just like an external hard drive connected to the system. From there
> I can do file-based backups, which are far more space-efficient.

I tried doing something like this once upon a time. Taking an LVM snapshot on 
an active disk system will kill your performance. It may work, but you may find 
that it's terribly slow. 

It's best to back up virtual machines by connecting to the virtual machine 
itself and backing up from there.
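For anyone trying the snapshot approach anyway, it looks roughly like this
(volume group, volume names, and sizes are hypothetical):

# snapshot the VM's volume, mount it read-only, back up, then clean up
lvcreate --snapshot --size 5G --name vm1-snap /dev/vg0/vm1
mount -o ro /dev/vg0/vm1-snap /mnt/vm1-snap
# ... point the backup at /mnt/vm1-snap ...
umount /mnt/vm1-snap
lvremove -f /dev/vg0/vm1-snap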

-- 
Carl Soderstrom
Systems Administrator
Real-Time Enterprises
www.real-time.com



Re: [BackupPC-users] Restore entire Windows OS using rsync (using rescue disk)?

2011-01-03 Thread Carl Soderstrom
On Dec 29, 2010, at 7:36 PM, Ryan Blake wrote:
> Now that I finally got backups working as they should, I have ran into a 
> problem.  Right now we have a machine that has a great deal of problems 
> (drivers missing, programs are installed that we do not know how they got 
> there, etc.) and I was hoping to be able to restore using the rsync protocol. 

I've not paid a lot of attention to this thread, so pardon me if I'm
repeating anything, but: I've never done restores using rsync; I always use
tar.

Briefly:
 * boot the target system with some kind of rescue CD
 * use BackupPC_tarCreate to create a stream of files in tar format from
whichever backup you choose
 * pipe that through netcat to your target system
 * unpack the files on the target system by piping the output of netcat to tar
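A concrete sketch of that pipeline (hostnames, port, and the backup number
are placeholders; start the listener on the target first):

# on the rescue-booted target, listen first:
nc -l -p 9000 | tar -xpf - -C /mnt/target

# on the BackupPC server, stream the latest backup (-n -1) of share /:
BackupPC_tarCreate -h client.example.com -n -1 -s / . | nc target-rescue 9000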

The problem with Windows, of course, is the registry. I've not tried a full
restore of a Windows box, but I know it would not be easy. Even if you can
rewrite the Windows bootloader and make it work, the restored system will
not necessarily have a consistent registry, so application installations may
be broken anyway.

-- 
-- 
Carl Soderstrom