[BackupPC-users] Matching files against the pool remotely.

2009-12-18 Thread Malik Recoing.
Hello,

I'm trying to optimize BackupPC for use over the internet with a lot of clients (say
100 per server). Clients run rsyncd and are connected via DSL of variable
speed.

Many discussions on this list have helped me a lot, but I can't figure out one thing:
does BackupPC use rsync features to skip a file already in the pool _before_
uploading it? Or does it need to upload the file first, after which it is matched
against the pool and eventually replaced by a hard link?

In the first case this would save both bandwidth and disk; in the second case, only
disk space. Is BackupPC able to match a file remotely?

The Holy Doc says (Barratt: Design: Operation: 2): "it checks each file in the
backup to see if it is identical to an existing file from any previous backup of
any PC. It does this without needing to write the file to disk."

But it doesn't say without the need to upload the file into memory first.

I know a file will be skipped if it is present in the previous backup, but what
happens if the file has already been backed up for another host?

Thank you for your enlightenments.

Malik.










[BackupPC-users] SMB Restore Issues - Trailing slashes reversed

2009-12-18 Thread Craig Connoll
Hi all.

I have BackupPC installed and working as I want it to, but I have an
issue when trying to restore a Windows backup.

All permissions are correct and there are no failures, except when restoring to
Windows machines.

I think the problem is the slashes: they are forward slashes
instead of the backslashes that Windows uses.

Original file/dir
172.16.10.222:/c$/Program Files/prog1/DataRetrieval 22_03_07.zip

Will be restored to
172.16.10.222:/c$/Program Files/prog1/DataRetrieval 22_03_07.zip


Now when I try to change the slashes during the restore procedure it
produces this:
172.16.10.222:/c$/\Program Files\AES Energy Tracker/DataRetrieval 22_03_07.zip

Is there some way to fix this?
Any help will be much appreciated

Regards,

Crashinit6
ICT Support
The City of London Academy



Re: [BackupPC-users] Matching files against the pool remotely.

2009-12-18 Thread Tim Connors
On Fri, 18 Dec 2009, Malik Recoing. wrote:

 The Holy Doc says (Barratt: Design: Operation: 2): "it checks each file in the
 backup to see if it is identical to an existing file from any previous backup of
 any PC. It does this without needing to write the file to disk."

 But it doesn't say without the need to upload the file into memory first.

 I know a file will be skipped if it is present in the previous backup, but what
 happens if the file has already been backed up for another host?

It is required to be uploaded first as otherwise there's nothing to
compare it to (yeah, I know, that's a pain[1]).

It might theoretically be sufficient to let the remote side calculate a
hash and compare it against the files in the pool with matching hashes,
and then let rsync do full compares against all the matching hashes in the
pool (since hash collisions happen), but I don't believe anyone has tried
to code this up yet, and it would only be of limited use in systems that
were network bandwidth constrained rather than disk bandwidth constrained.
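
Something like this, very roughly (a Python sketch rather than a design -- nothing
like it exists in BackupPC today; pool_lookup and full_compare_with_client are
made-up helpers, and a plain whole-file MD5 stands in for whatever digest the pool
would really use):

import hashlib

def client_digest(path, chunk_size=1 << 20):
    """Client side: hash the whole file before offering it to the server.
    (A plain whole-file MD5 is assumed here; the real pool digest would differ.)"""
    h = hashlib.md5()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()

def server_wants_file(digest, pool_lookup, full_compare_with_client):
    """Server side: given only the client's digest, decide whether the file must
    be transferred.  pool_lookup(digest) and full_compare_with_client(pool_file)
    are hypothetical helpers; the full compare (e.g. an rsync block-checksum
    exchange) is still needed because hash collisions do happen."""
    for pool_file in pool_lookup(digest):
        if full_compare_with_client(pool_file):
            return False   # an identical copy is already pooled: skip the upload
    return True            # no pool match: ask the client to send the file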

[1] I just worked around this myself by copying a large set of files onto
sneakernet (my USB key), copying them onto a directory on the local backup
server, backing that directory up, then moving the corresponding directory
in the backup tree into the previous backup of the remote system, so it
will be picked up and compared against the same files when that remote
system is next backed up.  I find out tomorrow whether that actually
worked :)


-- 
TimC
Computer screens simply ooze buckets of yang.
To balance this, place some women around the corners of the room.
-- Kaz Cooke, Dumb Feng Shui



Re: [BackupPC-users] Matching files against the pool remotely.

2009-12-18 Thread Malik Recoing.
Tim Connors tim.w.connors at gmail.com writes:
 
 On Fri, 18 Dec 2009, Malik Recoing. wrote:
 
  I know a file will be skipped if it is present in the previous backup, but
  what happens if the file has already been backed up for another host?
 
 It is required to be uploaded first as otherwise there's nothing to
 compare it to (yeah, I know, that's a pain[1]).
 
 It might theoretically be sufficient to let the remote side calculate a
 hash and compare it against the files in the pool with matching hashes,
 and then let rsync do full compares against all the matching hashes in the
 pool (since hash collisions happen), but I don't believe anyone has tried
 to code this up yet, and it would only be of limited use in systems that
 were network bandwidth constrained rather than disk bandwidth constrained.

I'm quite sure it would be an improvement for both. Globally there would be no
overhead. Better still: the hash calculation would effectively be distributed,
since it is delegated to the clients. The matching of identical hashes is done by
BackupPC_link anyway, so BackupPC_link would become pointless in an rsync-only
configuration. Both disk and network traffic would be reduced, as many files
wouldn't be transferred at all.

If such a feature existed, it would give BackupPC a magic touch, backing up a
whole tree of well-known files in a minute even over a slow network.

What a pity I'm not fluent in Perl...


 [1] I just worked around this myself by copying a large set of files onto
 sneakernet (my USB key), copying them onto a directory on the local backup
 server, backing that directory up, then moving the corresponding directory
 in the backup tree into the previous backup of the remote system, so it
 will be picked up and compared against the same files when that remote
 system is next backed up.  I find out tomorrow whether that actually
 worked :)
 

I thought of a similar solution. When your clients are mostly full-system-tree
backups, you could keep ready-to-copy backups of the different OS trees. When a
new client is added, you copy the corresponding OS directory in as if it were the
first full backup.

Malik.







Re: [BackupPC-users] Matching files against the pool remotely.

2009-12-18 Thread Les Mikesell
Malik Recoing. wrote:

 I know a file will be skipped if it is present in the previous backup, but
 what happens if the file has already been backed up for another host?

 It is required to be uploaded first as otherwise there's nothing to
 compare it to (yeah, I know, that's a pain[1]).

 It might theoretically be sufficient to let the remote side calculate a
 hash and compare it against the files in the pool with matching hashes,
 and then let rsync do full compares against all the matching hashes in the
 pool (since hash collisions happen), but I don't believe anyone has tried
 to code this up yet, and it would only be of limited use in systems that
 were network bandwidth constrained rather than disk bandwidth constrained.

 I'm quite sure it would be an improvement for both. Globally there would be no
 overhead. Better still: the hash calculation would effectively be distributed,
 since it is delegated to the clients. The matching of identical hashes is done
 by BackupPC_link anyway, so BackupPC_link would become pointless in an
 rsync-only configuration. Both disk and network traffic would be reduced, as
 many files wouldn't be transferred at all.

There are two problems: one is that the remote agent is a standard rsync
binary that knows nothing about BackupPC's hashes; the other is that
hash collisions are normal and expected - and disambiguated by a full
data comparison.

 I thought of a similar solution. When your clients are mostly full-system-tree
 backups, you could keep ready-to-copy backups of the different OS trees. When a
 new client is added, you copy the corresponding OS directory in as if it were
 the first full backup.

Yes, if your remote machines are essentially clones of each other, you 
could create their pc directories as clones with a tool that knows how 
to make a tree of hardlinks.
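
For what it's worth, a minimal sketch of such a hardlink clone in Python (roughly
what cp -al does); the pc-directory paths in the comment are placeholders only,
and this does not touch BackupPC's per-host metadata (e.g. the backups file under
each pc directory), so it only illustrates the hardlink part:

import os

def clone_tree_with_hardlinks(src_root, dst_root):
    """Recreate the directory structure of src_root under dst_root, hard-linking
    every file instead of copying it, so the clone uses almost no extra disk
    space (both names point at the same inode) -- roughly what cp -al does."""
    for dirpath, _dirnames, filenames in os.walk(src_root):
        rel = os.path.relpath(dirpath, src_root)
        target_dir = dst_root if rel == "." else os.path.join(dst_root, rel)
        os.makedirs(target_dir, exist_ok=True)
        for name in filenames:
            os.link(os.path.join(dirpath, name), os.path.join(target_dir, name))

# Hypothetical usage -- path names are placeholders, not a tested recipe:
# clone_tree_with_hardlinks("/var/lib/backuppc/pc/template-host/0",
#                           "/var/lib/backuppc/pc/new-host/0")

A raw clone like this would almost certainly not be sufficient on its own, since
BackupPC keeps additional per-host bookkeeping, but it shows the idea.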

A better solution might be to have a local machine at the site running
BackupPC and work out some way to get an offsite copy.  If bandwidth is
such an issue, you are also going to have trouble doing a restore.  But
if you've followed this mailing list for very long, you'll know that the
'offsite copy' problem doesn't have a good solution yet either.

-- 
   Les Mikesell
lesmikes...@gmail.com





Re: [BackupPC-users] Backuppc performance 3Mb

2009-12-18 Thread Michael Stowe

 I tried backing up via SMB and via rsyncd, but performance is always
 ~2Mb/sec.

 I don't know if that performance is OK for my configuration, but it seems
 bad to me.

Are your drives capable of better than 2Mb/sec sustained throughput?



Re: [BackupPC-users] Backuppc performance 3Mb

2009-12-18 Thread Les Mikesell
Michael Stowe wrote:
 I tried backing up via SMB and via rsyncd, but performance is always
 ~2Mb/sec.

 I don't know if that performance is OK for my configuration, but it seems
 bad to me.
 
 Are your drives capable of better than 2Mb/sec sustained throughput?

And if that is per-target, are you running several backups concurrently? 
More RAM might help, LVM hurts a bit.

-- 
   Les Mikesell
lesmikes...@gmail.com





Re: [BackupPC-users] Backuppc performance 3Mb

2009-12-18 Thread David Young
Just curious, what's the max sustained throughput that anyone has seen with
their system?  2-3 Mbit/s is only about 250-375 KB/s, which seems really slow.
I'm in the process of setting up a local BackupPC server and now am concerned
about performance.

On Fri, Dec 18, 2009 at 9:25 AM, Les Mikesell lesmikes...@gmail.com wrote:

 Michael Stowe wrote:
  I tried backing up via SMB and via rsyncd, but performance is always
  ~2Mb/sec.

  I don't know if that performance is OK for my configuration, but it seems
  bad to me.
 
  Are your drives capable of better than 2Mb/sec sustained throughput?

 And if that is per-target, are you running several backups concurrently?
More RAM might help, LVM hurts a bit.

 --
   Les Mikesell
lesmikes...@gmail.com








-- 
David


Re: [BackupPC-users] Backuppc performance 3Mb

2009-12-18 Thread Sabuj Pattanayek
On Fri, Dec 18, 2009 at 1:27 PM, David Young randomf...@gmail.com wrote:
 Just curious, what's the max sustained throughput that anyone has seen with
 their system?  2-3Mbit/s is 250KB/s which seems really slow.  I'm in the
 process of setting up a local Backuppc server and now am concerned about
 performance.

I get 56 MB/s (448 Mbps) over a GigE connection using tar over NFS for a
backup client with mostly large files.



Re: [BackupPC-users] Unexpected call BackupPC::Xfer::RsyncFileIO->unlink(...)

2009-12-18 Thread Jeffrey J. Kosowsky
Jeffrey J. Kosowsky wrote at about 13:11:37 -0500 on Monday, November 2, 2009:
  Unexpected call
  BackupPC::Xfer::RsyncFileIO->unlink(cygwin/usr/share/man/man3/addnwstr.3x.gz)

  I encountered the above error in my backup logs - repeated hundreds of
  times, though interestingly all on different files in my
  C:\cygwin\usr\share\man\man3 directory.
  
  
  Now for context, this was part of a full backup after I reinstalled
  Windows on a laptop. I had been testing BackupPC beforehand, so I had
  interrupted it early in the backup several times (and I believe the
  cygwin directory is placed early in the backup based on
  alphabetical order).
  
  Also, after interrupting a very partial backup, I noticed that
  BackupPC_link was running. So perhaps this is 'undoing' a
  BackupPC_link operation that ran on a partial backup?

  In any case, I am curious to know what causes the error and what it
  means. Is it an error on my system (in which case maybe I should be looking
  at my system) or is it an error in BackupPC?
  
  
  Note that in the thread from 2005 quoted below, Craig claims that the error is
  benign, but doesn't explain how/why.

Well, I just upgraded and reinstalled Fedora on my Linux server and
ran a new full backup. Again I noticed dozens of these types of
errors. All of them appear to be symbolic links, but the links, both in
the new full and in the previous full, appear to be intact. Also, this
occurred on only about 40 out of many hundreds of symbolic links on my
system.

I am curious about what could be causing this situation, which seems to be:
1. Limited to symlinks
2. Triggered only after a change to the system (presumably rsync
   is seeing the same link with a different inode)
3. Confined to only some of the links.

Any thoughts?


  
  
  Thanks
  
  
  
   Brendan Simon writes: Sun, 13 Nov 2005 09:13:01 -0800
   
Could someone tell me what the following errors mean?

Unexpected call
BackupPC::Xfer::RsyncFileIO->unlink(john/aegis/CN.1.5.1.4.C117/images/CN-image.tar.gz)
[ skipped 21 lines ]
Unexpected call
BackupPC::Xfer::RsyncFileIO->unlink(john/aegis/CN.1.5.1.4.C117/src/ethernetd/main.c)
[ skipped 45 lines ]
Unexpected call
BackupPC::Xfer::RsyncFileIO->unlink(john/aegis/CN.1.5.1.4.C117/src/ethernetd/timer.c)
[ skipped 41134 lines ]
   
   The error itself is benign.  But for some reason BackupPC thinks
   that the existing file (ie: the one backed up in the previous full
   backup) is not a regular file.  The file will be re-transferred.
   
   What happens when you browse the previous full backup and look
   at the file type of those three files?
   
   Craig
   
  
  



Re: [BackupPC-users] Slow link options

2009-12-18 Thread Kameleon
I was able to get a full backup on 3 of the 4 servers. For two of them I was
able to take the BackupPC machine to the location, and the third was small
enough to complete over the T1. However, we have one remaining
remote location with approximately 309GB of data that needs backing up.
That initial full will take anywhere in the range of 10-20 DAYS according to
my numbers.
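
(For reference, a quick back-of-the-envelope check of that estimate, assuming a
nominal T1 at about 1.5 Mbit/s and ignoring protocol overhead:)

# Rough sanity check of the 10-20 day estimate for 309 GB over a T1 link.
t1_bits_per_sec = 1.544e6                   # nominal T1 line rate
bytes_per_sec = t1_bits_per_sec / 8         # about 193 KB/s
total_bytes = 309e9                         # roughly 309 GB of data
days = total_bytes / bytes_per_sec / 86400
print(round(days, 1))                       # about 18.5 days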

However, we do have an rsync copy of that server on another in-house server. It
is a complete rsync minus a few directories like /proc, /var, etc. So that we don't
have to take the BackupPC machine 3 hours away, does anyone know if it would
be possible to somehow set up BackupPC to use the complete existing in-house
rsync copy as the base for the initial full backup?

The setup on the rsync backup is that the entire backup is stored in
/server/servername. Since BackupPC stores the files in relation to the root
directory, how would I move the files from
/var/lib/backuppc/pc/rsync-server/0/f%2fserver%2fremoteserver to
/var/lib/backuppc/pc/remoteserver/0/f%2f? Should it be as simple as moving
the folders, or is this even possible?
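
In case it helps the discussion, this is the kind of directory shuffle I have in
mind, as a purely speculative Python sketch -- the host names and backup numbers
are placeholders, and nothing here updates BackupPC's per-host metadata or attrib
files, so please treat it as an illustration of the question rather than a tested
recipe:

import os
import shutil

TOPDIR = "/var/lib/backuppc/pc"   # assumed pc directory; adjust for the install

# Backup #0 of the in-house "rsync-server" host, where the remote box's tree
# lives under the mangled share path f%2fserver%2fremoteserver.
src = os.path.join(TOPDIR, "rsync-server", "0", "f%2fserver%2fremoteserver")

# Where a first full of "remoteserver" itself would root its "/" share (f%2f).
dst = os.path.join(TOPDIR, "remoteserver", "0", "f%2f")

os.makedirs(os.path.dirname(dst), exist_ok=True)
shutil.move(src, dst)   # only the directory move; no metadata is handled here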



On Wed, Dec 16, 2009 at 3:19 PM, Chris Robertson crobert...@gci.net wrote:

 Kameleon wrote:
  I have a few remote sites I am wanting to backup using backuppc.
  However, two are on slow DSL connections and the other 2 are on T1's.
  I did some math and roughly figured that the DSL connections, having a
  256k upload, could do approximately 108MB/hour of transfer. With these
  clients having around 65GB each that would take FOREVER!!!
 
  I am able to take the backuppc server to 2 of the remote locations
  (the DSL ones) and put it on the LAN with the server to be backed up
  to get the initial full backup. What I am wondering is this: What do
  others do with slow links like this? I need a full backup at least
  weekly and incrementals nightly. Is there an easy way around this?

 The feasibility of this depends entirely on the rate of change of the
 backup data.  Once you get the initial full, rsync backups only transfer
 changes.  Have a look at the documentation
 (http://backuppc.sourceforge.net/faq/BackupPC.html#backup_basics) for
 more details.

 
  Thanks in advance.

 Chris






Re: [BackupPC-users] Matching files against the pool remotely.

2009-12-18 Thread Shawn Perry
Take a look at how Unison does its comparisons.

On Fri, Dec 18, 2009 at 9:12 AM, Les Mikesell lesmikes...@gmail.com wrote:
 Malik Recoing. wrote:

 I know a file will be skipped if it is present in the previous backup, but
 what happens if the file has already been backed up for another host?

 It is required to be uploaded first as otherwise there's nothing to
 compare it to (yeah, I know, that's a pain[1]).

 It might theoretically be sufficient to let the remote side calculate a
 hash and compare it against the files in the pool with matching hashes,
 and then let rsync do full compares against all the matching hashes in the
 pool (since hash collisions happen), but I don't believe anyone has tried
 to code this up yet, and it would only be of limited use in systems that
 were network bandwidth constrained rather than disk bandwidth constrained.

 I'm quite sure it would be an improvement for both. Globally there would be no
 overhead. Better still: the hash calculation would effectively be distributed,
 since it is delegated to the clients. The matching of identical hashes is done
 by BackupPC_link anyway, so BackupPC_link would become pointless in an
 rsync-only configuration. Both disk and network traffic would be reduced, as
 many files wouldn't be transferred at all.

 There are two problems: one is that the remote agent is a standard rsync
 binary that knows nothing about BackupPC's hashes; the other is that
 hash collisions are normal and expected - and disambiguated by a full
 data comparison.

 I thought of a similar solution. When your clients are mostly full-system-tree
 backups, you could keep ready-to-copy backups of the different OS trees. When a
 new client is added, you copy the corresponding OS directory in as if it were
 the first full backup.

 Yes, if your remote machines are essentially clones of each other, you
 could create their pc directories as clones with a tool that knows how
 to make a tree of hardlinks.

 A better solution might be to have a local machine at the site running
 BackupPC and work out some way to get an offsite copy.  If bandwidth is
 such an issue, you are also going to have trouble doing a restore.  But
 if you've followed this mailing list for very long, you'll know that the
 'offsite copy' problem doesn't have a good solution yet either.

 --
   Les Mikesell
    lesmikes...@gmail.com







Re: [BackupPC-users] Combine multiple Backuppc servers into one

2009-12-18 Thread Kameleon
To simplify what I am trying to accomplish, I will explain it this way:

We currently have 2 BackupPC servers. Both have 2x 1TB drives in a RAID1
array. What I want to do is move all the drives into one machine and set it
up as RAID5. That would give us 3TB usable rather than 2TB usable, which is
why I need to move everything to one setup.

Thanks for any guidance.

On Thu, Dec 17, 2009 at 7:35 PM, Kameleon kameleo...@gmail.com wrote:

 Thanks for that idea, but that is not an option. I need to combine both
 BackupPC machines into one physical BackupPC machine. Both servers have two
 1TB drives in a RAID1, if that matters.



 On Thu, Dec 17, 2009 at 6:52 PM, Shawn Perry redmo...@comcast.net wrote:

 You can use a virtual machine for each (I am using OpenVZ via Proxmox
 with my BackupPC, and it works perfectly).

 On Thu, Dec 17, 2009 at 2:35 PM, Kameleon kameleo...@gmail.com wrote:
  I have multiple BackupPC servers that I would like to combine into one
  physical machine. Each of them has different clients they were backing up.
  But in an effort to save power and heat, we are trying to consolidate
  machines. Is there an easy way to combine multiple BackupPC machines into
  one existing one?
 
 
 --
 
 







Re: [BackupPC-users] Combine multiple Backuppc servers into one

2009-12-18 Thread Les Mikesell
Kameleon wrote:
  To simplify what I am trying to accomplish, I will explain it this way:

  We currently have 2 BackupPC servers. Both have 2x 1TB drives in a RAID1
  array. What I want to do is move all the drives into one machine and set
  it up as RAID5. That would give us 3TB usable rather than 2TB usable,
  which is why I need to move everything to one setup.

  Thanks for any guidance.

There's no good way to merge existing pooled files, if that is what you
are asking, or to convert a RAID1 to a RAID5 without losing the
contents. I'd recommend building a new setup the way you want and
holding on to the old systems for as long as you might have a need to
restore from their older history, or perhaps generating tar images that
you can store elsewhere with BackupPC_tarCreate.  Once the new system
has collected the history you need, you can re-use the old drives.
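
As a sketch only, a small wrapper along these lines could dump a host's backup to
a tar file with BackupPC_tarCreate -- the option letters (-h host, -n backup
number, -s share) reflect my reading of its documentation, so check them against
your version, and the binary path, host, share and output paths are placeholders:

import subprocess

def archive_backup(host, backup_num, share, out_path):
    """Write one host's backup to a tar file using BackupPC_tarCreate.
    Run this as the backuppc user so the pool files are readable; the
    program's location varies by distribution."""
    with open(out_path, "wb") as out:
        subprocess.run(
            ["/usr/share/backuppc/bin/BackupPC_tarCreate",
             "-h", host, "-n", str(backup_num), "-s", share, "/"],
            stdout=out, check=True)

# Hypothetical usage; -1 should select the most recent backup of "oldhost":
# archive_backup("oldhost", -1, "/", "/mnt/external/oldhost-last-full.tar")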

-- 
   Les Mikesell
lesmikes...@gmail.com



Re: [BackupPC-users] Combine multiple Backuppc servers into one

2009-12-18 Thread Kameleon
I was afraid of that. Thanks for the reply. It may make better sense to have
multiple servers for a bit anyway. Hopefully we will soon be getting a
dedicated Dell server for this, so I can just set it up to do backups and
leave the current ones in archive mode until such time as the data is
outdated.

Thank you very much.



On Fri, Dec 18, 2009 at 3:27 PM, Les Mikesell lesmikes...@gmail.com wrote:

 Kameleon wrote:
  To simplify what I am trying to accomplish, I will explain it this way:

  We currently have 2 BackupPC servers. Both have 2x 1TB drives in a RAID1
  array. What I want to do is move all the drives into one machine and set
  it up as RAID5. That would give us 3TB usable rather than 2TB usable,
  which is why I need to move everything to one setup.
 
  Thanks for any guidance.

 There's no good way to merge existing pooled files, if that is what you
 are asking, or to convert a RAID1 to a RAID5 without losing the
 contents. I'd recommend building a new setup the way you want and
 holding on to the old systems for as long as you might have a need to
 restore from their older history, or perhaps generating tar images that
 you can store elsewhere with BackupPC_tarCreate.  Once the new system
 has collected the history you need, you can re-use the old drives.

 --
   Les Mikesell
lesmikes...@gmail.com





Re: [BackupPC-users] Unexpected call BackupPC::Xfer::RsyncFileIO->unlink(...)

2009-12-18 Thread Holger Parplies
Hi,

Jeffrey J. Kosowsky wrote on 2009-12-18 15:36:48 -0500 [Re: [BackupPC-users]
Unexpected call BackupPC::Xfer::RsyncFileIO->unlink(...)]:
 Jeffrey J. Kosowsky wrote at about 13:11:37 -0500 on Monday, November 2, 2009:
   Unexpected call
   BackupPC::Xfer::RsyncFileIO->unlink(cygwin/usr/share/man/man3/addnwstr.3x.gz)

   [...]

   Note that in the thread from 2005 quoted below, Craig claims that the error is
   benign, but doesn't explain how/why.
 
 [...]
 
 I am curious about what could be causing this situation [...]

if you're curious about what is causing a benign warning message, you're
probably on your own for the most part. I can supply you with one casual
observation and one tip:

I think I saw that warning when I changed from tar to rsync XferMethod. As you
know, tar and rsync encode the file type "plain file" differently in attrib
files (rsync has a bit for it; tar, like stat(), doesn't, and simply takes the
absence of a bit for a special file type to mean plain file). When rsync
compares remote and local file types (remote from the remote rsync instance, local
from an attrib file generated by the tar XferMethod), it assumes a plain file changed
its type, so it removes the local copy (if that sounds strange, remember that
File::RsyncP mimics plain rsync, which *would* delete the local file; with
BackupPC's storage backend, that doesn't make sense, hence the warning) and
transfers the remote file without a local copy to compare to. Or something
like that.
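
To make the mismatch concrete, here is a purely conceptual Python illustration --
this is not BackupPC's actual attrib format, and the helper names and mode values
are made up; it only shows how "plain file signalled by an explicit type bit" and
"plain file inferred from the absence of special-type bits" can disagree about the
same file:

import stat

def type_with_explicit_bit(mode):
    """Convention A (rsync-like): plain file only if the S_IFREG bit is set."""
    return "plain" if stat.S_ISREG(mode) else "other"

def type_by_absence(mode):
    """Convention B (tar/stat-like, as described above): anything that is not
    marked as a special type is assumed to be a plain file."""
    if (stat.S_ISLNK(mode) or stat.S_ISDIR(mode) or stat.S_ISCHR(mode)
            or stat.S_ISBLK(mode) or stat.S_ISFIFO(mode) or stat.S_ISSOCK(mode)):
        return "other"
    return "plain"

# The same on-disk plain file can be recorded in two ways:
mode_with_type_bit = stat.S_IFREG | 0o644   # explicit plain-file bit present
mode_without_type  = 0o644                  # permissions only, no type bits

print(type_with_explicit_bit(mode_with_type_bit))  # plain
print(type_with_explicit_bit(mode_without_type))   # other  <- looks like a type change
print(type_by_absence(mode_without_type))          # plain

# A comparison that insists on the explicit bit misreads the second entry as
# "not a plain file", treats it as a type change, and takes the delete-and-
# retransfer path -- which surfaces as the benign "Unexpected call ... unlink".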

If you want to know more, look at where the source code generates the warning
message (well, that's stated *in* the warning message) and where that code is
called from (presumably File::RsyncP) and in which circumstances.

Good luck. Since you asked, don't forget to report back ;-).

Regards,
Holger
