Re: [BackupPC-users] lost pc/*/backup file

2010-08-31 Thread Robert Strötgen
Solved: I just found /usr/share/backuppc/bin/BackupPC_fixupBackupSummary.
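
For anyone else who hits this, here is a minimal sketch of how I would expect
to run it on a Debian-style install (run it as the backuppc user; whether it
wants host names as arguments is something to verify against the script's own
usage output, as I have not checked that part):

  # rebuild the per-host backup summary files from the per-backup backupInfo files
  su -s /bin/sh -c '/usr/share/backuppc/bin/BackupPC_fixupBackupSummary' backuppc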

Robert

On 01.09.2010 00:19, Robert Strötgen wrote:
> Hi,
>
> during a crash I lost the file pc/<host>/backup (and backup.old). Is
> there a way to restore the file (automatically or manually from
> pc/<host>/<nnn>/backupInfo)?
>
> Thanks and best regards
> Robert




[BackupPC-users] lost pc/*/backup file

2010-08-31 Thread Robert Strötgen
Hi,

during a crash I lost the file pc/<host>/backup (and backup.old). Is
there a way to restore the file (automatically or manually from
pc/<host>/<nnn>/backupInfo)?

Thanks and best regards
Robert



Re: [BackupPC-users] Anyone using r1soft hotcopy as a pre/post backup command?

2010-08-31 Thread Kameleon
OK, I tested the following and it worked. Here is what I used, for others'
future reference:

DumpPreUserCmd: $sshPath -q -x -l root $host /usr/sbin/hcp /dev/sda1
(/dev/sda1 is my / partition so I run hcp against that)
DumpPostUserCmd: $sshPath -q -x -l root $host /usr/sbin/hcp -r /dev/hcp1
(this stops the snapshot and unmounts it in one fell swoop)
RsyncShareName: /var/hotcopy/sda1_hcp1 (or whatever your version of hcp
mounts the snapshot to)

Obviously I am using rsync to back up this machine. I have tested both backups
and restores. For a restore, you just have to tell it to restore to / instead
of the /var/hotcopy/... path and you should be good.
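
In case it helps to sanity-check the setup before wiring it into BackupPC,
this is roughly how I would test the same sequence by hand from the BackupPC
server (the client hostname and device names here are just examples from my
setup; adjust them to yours):

  # create the hot copy on the client and confirm it is mounted
  ssh -q -x -l root client.example.com /usr/sbin/hcp /dev/sda1
  ssh -q -x -l root client.example.com ls /var/hotcopy/sda1_hcp1

  # ...run a trial rsync backup against /var/hotcopy/sda1_hcp1 here...

  # tear the snapshot down again
  ssh -q -x -l root client.example.com /usr/sbin/hcp -r /dev/hcp1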

I hope this helps someone in the future.

Donny B.


On Tue, Aug 31, 2010 at 11:03 AM, Kameleon  wrote:

> I am looking at using r1soft's hotcopy to enable a snapshot of the
> filesystem before backuppc does its magic, similar to how Microsoft Volume
> Shadow Copy or LVM snapshots work. What I am wondering is this: does anyone
> currently use this type of setup, and if so, would you mind sharing your
> pre/post commands so I can compare to what I am thinking? Thanks in
> advance.
>
> Donny B.
>


Re: [BackupPC-users] Backup backuppc

2010-08-31 Thread Les Mikesell
On 8/31/2010 12:09 PM, Josh Malone wrote:
> Farmol SPA wrote:
>> Hi list.
>>
>> I would like to ask which is the simplest yet effective way to dump
>> backuppc stuff (mainly __TOPDIR__), e.g. to a removable hard disk that
>> will be used in a disaster-recovery scenario where the plant was
>> destroyed and I need to restore data from this surviving device. Is
>> "rsync -aH" enough?
>>
>> TIA.
>> Alessandro
>
> If your 'topdir' is its own filesystem, lvm, etc., you can use 'dump' to
> back up a snapshot of your pool. Adapting the script from the link posted
> earlier:
>
> #!/bin/bash
>
> EXTDISK=/dev/sdc
> POOLDISK=/dev/volgroup/backuppc      # LV holding the BackupPC pool (__TOPDIR__)
> SNAPSHOT=/dev/volgroup/snapshot
>
> # snapshot the pool LV, mount the external disk, dump the snapshot to it
> lvcreate -l '100%FREE' -s -n snapshot $POOLDISK
> mount $EXTDISK /mnt/tmp
> dump -a0f - $SNAPSHOT | gzip > /mnt/tmp/pool-backup.0.gz
>
> # clean up afterwards
> umount /mnt/tmp
> lvremove -f $SNAPSHOT

But time a restore before deciding whether the speed will be acceptable 
or not.  I think dump can write the inode tables fairly quickly, but the 
restore side has to do lookups to rebuild the hard links and might take 
days to complete.
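
A rough sketch of what such a timing test might look like (the dump file path
follows the script quoted above; /mnt/scratch is just an example of an empty
test filesystem, not your live pool):

   cd /mnt/scratch
   time sh -c 'gunzip -c /mnt/tmp/pool-backup.0.gz | restore -rf -'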

-- 
   Les Mikesell
 lesmikes...@gmail.com







Re: [BackupPC-users] Backup backuppc

2010-08-31 Thread Josh Malone

Farmol SPA wrote:
> Hi list.
>
> I would like to ask which is the simplest yet effective way to dump
> backuppc stuff (mainly __TOPDIR__), e.g. to a removable hard disk that
> will be used in a disaster-recovery scenario where the plant was
> destroyed and I need to restore data from this surviving device. Is
> "rsync -aH" enough?
>
> TIA.
> Alessandro


If your 'topdir' is its own filesystem, lvm, etc., you can use 'dump' to back
up a snapshot of your pool. Adapting the script from the link posted earlier:


#!/bin/bash

EXTDISK=/dev/sdc
POOLDISK=/dev/volgroup/backuppc      # LV holding the BackupPC pool (__TOPDIR__)
SNAPSHOT=/dev/volgroup/snapshot

# snapshot the pool LV, mount the external disk, dump the snapshot to it
lvcreate -l '100%FREE' -s -n snapshot $POOLDISK
mount $EXTDISK /mnt/tmp
dump -a0f - $SNAPSHOT | gzip > /mnt/tmp/pool-backup.0.gz

# clean up afterwards
umount /mnt/tmp
lvremove -f $SNAPSHOT



--

Joshua Malone         Systems Administrator
(jmal...@nrao.edu)    NRAO Charlottesville
434-296-0263          www.nrao.edu
434-249-5699 (mobile)

BOFH excuse #327:
The POP server is out of Coke





[BackupPC-users] Anyone using r1soft hotcopy as a pre/post backup command?

2010-08-31 Thread Kameleon
I am looking at using r1soft's hotcopy to enable a snapshot of the
filesystem before backuppc does its magic, similar to how Microsoft Volume
Shadow Copy or LVM snapshots work. What I am wondering is this: does anyone
currently use this type of setup, and if so, would you mind sharing your
pre/post commands so I can compare to what I am thinking? Thanks in
advance.

Donny B.


Re: [BackupPC-users] Why does backuppc transfer files already in the pool

2010-08-31 Thread martin f krafft
Here is how I think an algorithm could work, which requires no
changes on the remote side, so it should work with the plain rsync
protocol, but likely will require some changes to rsyncp.

Let:
  B   be the BackupPC_dump process
  P   be the peer (PC to be backed up)
  R   be the remote rsync process
  F   be a file on P that backuppc wants to save

  Gc, Gc2  be two compressed files in the same chain in the server pool
  G1, G2  be their uncompressed data
  F[x], G1[x] be the xth block of F or G1 respectively

  HR(b) be the rsync rdelta-hash of block b
  HL(b) be backuppc's hash-for-pooling of block b



0. B has already contacted P and R has been spawned. Maybe some
   files have already been transferred.
   Now B arrives at F and determines that it needs to transfer the
   file (i.e. it's new, or the attributes have changed).

1. Since we need the first 8 blocks to determine the pool chain, we
   just receive the first 8 blocks. Depending on the protocol, this
   possibly requires us to ask for the hashes first, and to pretend
   that each hash simply does not match our (fake) local hashes:

newfile = [];   // array of blocks

for( x=0 ; x < 8 ; ++x ) {

  tmphash = recv( HR( F[x] ));
  if tmphash == IMPOSSIBLE_HASH { explode(); }
  newfile += recv( F[x] );
}

2. Okay, now we have the first 8 blocks and can determine the pool
   chain, if there is one:

localfiles = []
localfiles = get_pool_files( newfile[0], newfile[7] )

3. And now iteratively ask for hashsums from the remote, compare
   them to the corresponding blocks in the pool files (which may be
   cached!), and assemble the new file:

x = 8
while !F.eof() {

  tmphash = recv( HR( F[x] ));
  tmpblock = "";

  for G in localfiles {  // iterate G1, G2, …
if tmphash == HR( G[x] ) {
  tmpblock = G[x];
  break;
  /* possible optimisation: replace localfiles with just
   * localfile so that for successive blocks, not all local
   * files have to be compared.
   */
}
  }

  if !tmpblock {
tmpblock = recv( F[x] );
  }

  newfile += tmpblock;

}

4. At the very end, determine whether the new data is identical to
   existing data and, if so, hardlink to it. This can either be done
   with a full-file hashsum (to be safe), or the above algorithm can
   determine it while it's running.



If this logic cannot be implemented inside or on top of rsyncp, then
maybe the following logic would work:

1. Create a buffer in memory and fill it with zeroes;

2. Pass this buffer to rsyncp and ask it to synchronise F into it;

3. After 8×128k blocks have been determined:

   a. determine if there is a corresponding pool file;

   b. if yes, uncompress it and write it into the buffer;

4. Let rsyncp continue, with a potentially modified buffer.


Thoughts?

-- 
martin | http://madduck.net/ | http://two.sentenc.es/
 
"i've got a bike,
 you can ride it if you like,
 it's got a basket. a bell that rings, and things to make it look good.
 i'd give it to you if i could,
 but i borrowed it."
  -- syd barrett, 1967
 
spamtraps: madduck.bo...@madduck.net




[BackupPC-users] Host Summary page never loads

2010-08-31 Thread Mark Farmer
Hi

I'm having a problem with the BackupPC web interface host summary page: with
the full list of hosts configured (24 hosts), it never loads when I click on
the link. After reducing the list of hosts in /etc/backuppc/hosts to a single
host, the page does load, but only after a long delay (about 30 seconds). The
odd thing is that I have another server in another location which works fine,
and all hosts are reachable from both servers.

The backups seem to be running OK although they do seem rather slow in some 
cases.

The OS is Debian 5 with all updates applied, and the BackupPC version is 3.1.0
on both servers.
The storage space is on a SAN mounted via NFS, with the uid & gid matched
between the SAN and the BackupPC server (this is the same for both servers).

Does anyone have any ideas as to what might be causing this problem?

Thanks and regards
Mark.
