[BackupPC-users] removing files from backups

2013-08-06 Thread Kameleon
I have been fighting with the proper syntax for excluding files and folders.
Now that I have it working properly I need to go into the backups and
remove the files I was trying to exclude. Expiring the backups just for
these few folders/files is not worth it. Is there an easy(ish) way to do
this via the command line?
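
One hedged sketch of doing it by hand, assuming a BackupPC 3.x layout under
/var/lib/backuppc and a hypothetical host "myhost" with backup number 123:
names inside a backup are mangled (a leading 'f', with '/' encoded as %2f),
and per-directory attrib files can keep stale entries, so treat this as
last-resort surgery and stop the daemon first:

    cd /var/lib/backuppc/pc/myhost/123/f%2f          # share '/', backup 123
    rm -rf fsome_dir                                 # 'some_dir' with the 'f' mangle
    /usr/share/backuppc/bin/BackupPC_nightly 0 255   # reclaim orphaned pool files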


Re: [BackupPC-users] Exclude directories

2013-06-07 Thread Kameleon
Sorry I didn't mention the xfer method. All servers use rsync over ssh. I
plan to try the above this weekend with the full runs.


On Fri, Jun 7, 2013 at 1:24 PM, Holger Parplies wb...@parplies.de wrote:

 Hi,

 Michael Stowe wrote on 2013-05-31 08:21:03 -0500 [Re: [BackupPC-users]
 Exclude directories]:
   On each of our samba servers inside of each share is a .deleted folder
   that all of the files that a user deletes from the share within windows
   goes to instead of actually being deleted immediately. I do not want to
   back these up but they are not all in the same path on all the servers.
   What is the correct syntax to exclude these from the backups? Should
   */.deleted work? Or will I need to explicitly declare all the paths?
 
  $Conf{BackupFilesExclude} = {
    '*' => [
      '*/.deleted',
      '*/.deleted/*'
    ]
  };

 strictly, this depends on the XferMethod (which the OP did not mention), but
 the above looks as though it should mostly work. For rsync(d), the '*/' in
 the patterns is meaningless except for preventing '.deleted' at the top
 level within the share from matching. Probably the same holds for tar, but
 I didn't check. As for smb, there always seems to be confusion whether
 in-/excludes need to contain slashes or backslashes. My memory seems to say
 backslashes, but I haven't ever used the smb XferMethod myself.

 In any case, it should be possible to use wildcards and *not* list all
 paths. Again, the syntax of in-/excludes depends on the XferMethod used.

 Regards,
 Holger
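
A quick illustration of the rsync pattern semantics Holger describes, as a
sketch with made-up paths (assuming the rsync XferMethod, where excludes are
passed through to rsync):

    rsync -a --exclude='*/.deleted' src/ dst/
    # skips src/a/.deleted and src/a/b/.deleted, but NOT a top-level src/.deleted
    rsync -a --exclude='.deleted' src/ dst/
    # skips .deleted at any depth, including the top level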





Re: [BackupPC-users] Exclude directories

2013-06-07 Thread Kameleon
Here is the entry in config.pl after adding it to the web interface. Is
this correct?

$Conf{BackupFilesExclude} = {
  '*' => [
'/sys',
'/proc',
'/dev',
'/tmp',
'/server',
'/mnt',
'*.tmp',
'.deleted',
'.deleted/'
  ]
};







[BackupPC-users] Exclude directories

2013-05-30 Thread Kameleon
On each of our samba servers, inside each share is a .deleted folder that
all files a user deletes from the share within Windows go to instead of
actually being deleted immediately. I do not want to back these up, but
they are not all at the same path on all the servers. What is the correct
syntax to exclude these from the backups? Should */.deleted work? Or will I
need to explicitly declare all the paths?


[BackupPC-users] Windows servers

2013-02-25 Thread Kameleon
What is the best way to back up Windows 2008 R2 servers with backuppc?
I have been looking at http://www.michaelstowe.com/backuppc/ but it
loses me at the requirement for winexe, which looks to need samba4. We
are on a samba3 domain and cannot upgrade for a while. Is there a good
step-by-step anywhere that can walk me through setting up a Windows
2008 R2 server to be backed up by backuppc?



Re: [BackupPC-users] Windows servers

2013-02-25 Thread Kameleon
I guess I am just not sure how to even begin to build it since there
is no documentation I can find.

On Mon, Feb 25, 2013 at 2:55 PM, Les Mikesell lesmikes...@gmail.com wrote:
 On Mon, Feb 25, 2013 at 1:59 PM, Kameleon kameleo...@gmail.com wrote:
 What is the best way to back up windows 2008r2 servers with backuppc?
 I have been looking at http://www.michaelstowe.com/backuppc/ but it
 loses me at the requirement for winexe which looks to need samba4. We
 are on a samba3 domain and cannot upgrade for a while. Is there a good
 step by step anywhere that can walk me through setting up a windows
 2008r2 server to be backed up by backuppc?


 It has been a while since I set up winexe but I think the source has
 everything you need to build it and you should be able run the
 resulting binary even if you only have samba3 installed.

 --
Les Mikesell
  lesmikes...@gmail.com




Re: [BackupPC-users] Windows servers

2013-02-25 Thread Kameleon
Thanks for the confirmation. I had tried building from the source to no
avail. I will try the repo in the morning.
On Feb 25, 2013 8:37 PM, Kris Lou k...@themusiclink.net wrote:

 ^^  I can confirm that the above repository works with CentOS 6 (and
 Michael's implementation).

 Kris Lou
 k...@themusiclink.net


 On Mon, Feb 25, 2013 at 3:19 PM, Les Mikesell lesmikes...@gmail.com wrote:

 On Mon, Feb 25, 2013 at 4:18 PM, Kameleon kameleo...@gmail.com wrote:
  I guess I am just not sure how to even begin to build it since there
  is no documentation I can find.

 There are an assortment of packages here:

 https://build.opensuse.org/package/repositories?package=winexe&project=home%3Aahajda%3Awinexe

 Or starting with the code here:
 http://sourceforge.net/projects/winexe/files/
 cd into the source4 directory and run configure, then make.  I think
 you get some errors and not everything builds, but if you get a binary
 in the winexe directory it is all you need.

 --
Les Mikesell
 lesmikes...@gmail.com
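
A sketch of the build sequence Les outlines; the tarball name and the
binary's output path are assumptions, since they vary by winexe version:

    tar xzf winexe-1.00.tar.gz    # hypothetical tarball from the SourceForge page
    cd winexe-1.00/source4
    ./configure
    make                          # may error partway through; that can be OK
    ls bin/winexe                 # if a winexe binary was produced, it is all you need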










[BackupPC-users] Proper way to remove all but most recent backups

2012-09-24 Thread Kameleon
I am trying to figure out the proper way to remove all but the most recent
backups on our system. I could just:

cd /var/lib/backuppc/pc/hostname
rm -rf XX (for each old backup)
/usr/share/backuppc/bin/BackupPC_nightly 0 255 (to actually remove the
files from the pool)

But is there another way?
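
A hedged alternative to deleting by hand: tighten retention in the per-host
config and let BackupPC expire the old backups itself (the counts below are
examples, not a recommendation):

    $Conf{FullKeepCnt} = 1;   # keep only the most recent full
    $Conf{IncrKeepCnt} = 1;   # keep only the most recent incremental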


[BackupPC-users] pulling initial full from a dump

2012-08-30 Thread Kameleon
I have set up our new backuppc server. However I am having a problem pulling
the initial full from two of our remote sites. Each site is on a T1 and has
around 500GB it needs to grab. The backup times out before it can finish.
Before, I have had to get an external USB drive, take it to the site, do an
rsync manually, and bring it back to the office to have backuppc grab it.
However I have a good full from our old backuppc server that I can dump
onto another physical machine here. I did this, but when I pulled the full
it mounted it as /home/servername instead of just /, so when the new server
tried to pull the incremental it was trying to pull from /, meaning it
didn't match any files and started pulling them all again. I cannot
remember for the life of me how I did it before to get it to work properly.
Any pointers?


Re: [BackupPC-users] pulling initial full from a dump

2012-08-30 Thread Kameleon
Perfect. Worked like a charm. Thanks for that pointer.

On Thu, Aug 30, 2012 at 9:39 AM, Adam Goryachev 
mailingli...@websitemanagers.com.au wrote:

 On 31/08/12 00:21, Kameleon wrote:
  I have set up our new backuppc server. However I am having a problem
  pulling the initial full from two of our remote sites. Each site is on
  a T1 and has around 500GB it needs to grab. The backup times out
  before it can finish. Before, I have had to get an external USB drive,
  take it to the site, do an rsync manually, and bring it back to the
  office to have backuppc grab it. However I have a good full from our
  old backuppc server that I can dump onto another physical machine
  here. I did this, but when I pulled the full it mounted it as
  /home/servername instead of just /, so when the new server tried to
  pull the incremental it was trying to pull from /, meaning it didn't
  match any files and started pulling them all again. I cannot remember
  for the life of me how I did it before to get it to work properly. Any
  pointers?

 Don't do this in real life, there is no warranty etc

 However, it once may have worked for me :)

 Get to your initial full backup:
 cd /var/lib/backuppc/pc/hostname/0
 mv f%2fhome%2fservername f%2f

 This just changes the directory to what backuppc expects.

 If you want to minimise issues, I would suggest you do a full backup,
 and then purge/retire backup 0, that way your first backup is properly
 done by backuppc.

 Regards,
 Adam

 --
 Adam Goryachev
 Website Managers
 www.websitemanagers.com.au
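
For reference, a note on the name mangling behind Adam's mv (BackupPC 3.x
encodes the share name into the directory name):

    # share name -> directory under pc/hostname/NNN/
    #   '/'                -> f%2f
    #   '/home/servername' -> f%2fhome%2fservername
    # i.e. a leading 'f' plus the share path with each '/' URL-encoded as %2f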






Re: [BackupPC-users] Centos 6.3 install

2012-08-10 Thread Kameleon
Thanks Les. That will give me the latest version of backuppc, correct? Also
all the configs and such should be in the default places like /etc/backuppc
and /usr/share/backuppc right? I do mount my pool to /var/lib/backuppc so I
am good there.

On Thu, Aug 9, 2012 at 5:33 PM, Les Mikesell lesmikes...@gmail.com wrote:

 On Thu, Aug 9, 2012 at 4:50 PM, Kameleon kameleo...@gmail.com wrote:
  I am trying to find the best way to install backuppc on my centos 6.3
 box.
  Everything I read is for Centos 5 and older so a lot does not apply to the
  new version. Anyone have any pointers or good sites? I tried just
 following
  the readme in the tarball but backuppc won't start. Our old server is
 ubuntu
  10.04 so that is no help either.

 Add the EPEL repository to your yum config if you haven't already and:
 yum install backuppc

 It will save some trouble if you mount the archive partition you want
 to use at /var/lib/BackupPC before the install.
 Then follow the instructions in /etc/httpd/conf.d/BackupPC.conf to add
 your web password.

 --
   Les Mikesell
 lesmikes...@gmail.com
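
A sketch of the sequence Les describes on CentOS 6; the epel-release URL,
device name, and password-file path are assumptions (check
/etc/httpd/conf.d/BackupPC.conf for the actual path on your install):

    rpm -Uvh http://dl.fedoraproject.org/pub/epel/6/x86_64/epel-release-6-8.noarch.rpm
    mount /dev/mapper/vg0-backups /var/lib/BackupPC   # hypothetical archive partition, mounted first
    yum install BackupPC
    htpasswd -c /etc/BackupPC/apache.users backuppc   # add the web password
    service backuppc start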





[BackupPC-users] Centos 6.3 install

2012-08-09 Thread Kameleon
I am trying to find the best way to install backuppc on my CentOS 6.3 box.
Everything I read is for CentOS 5 and older, so a lot does not apply to the
new version. Anyone have any pointers or good sites? I tried just following
the README in the tarball but backuppc won't start. Our old server is
Ubuntu 10.04 so that is no help either.


Re: [BackupPC-users] Yet another moving backuppc question

2012-06-22 Thread Kameleon
I forgot to explain that the current VM has a 6TB logical volume. I
would like to move that data to our iSCSI SAN which I have carved out
a 8TB logical volume on. The problem comes when I try to migrate the
lv via dd or other means as the host system takes a dump. Does it
sound like the only option is the rsync -vrlptgoDH command? That is
what I used to get the data from the original machine to the vm.

On Thu, Jun 21, 2012 at 1:12 PM, Kameleon kameleo...@gmail.com wrote:
 Currently our backuppc server is a virtual server. The root partition
 is separate from the data store. All of its storage is on the local
 virtual server host. It has a 6TB partition with about 85%
 utilization. I am wanting to move the data to our iscsi SAN. Every
 time we have tried moving the lvm of the data store the host machine
 freezes. So I am pretty much left with only rsync or a similar way to
 move stuff. Are there any other ways I have missed? Should the built-in
 backuppc_tar way of moving stuff be more efficient or better?



Re: [BackupPC-users] Yet another moving backuppc question

2012-06-22 Thread Kameleon
Thanks for the reply Les. Unfortunately setting up a new backuppc
and leaving the old is not an option in our case. Here is our
situation:

When we initially ordered this server we ordered it with 32GB ram,
dual quad-core xeon's, and 8x 2TB drives. We had to pass it off as a
backup server to get the funding through. So as soon as we got it, I
setup Xen on the physical host and then got to carving out LV's for
virtual machines. One of which was the 6TB for the backuppc store. Now
here we are a few years later and we are wanting to remove the disks
from this host and swap them with another physical server that has 8x
750GB drives. This is so that we can move the backuppc server onto the
physical host and to a different building away from the hosts it is
backing up. So I have to move everything off this physical host,
including the 6TB backuppc data store. I am trying to move everything
I can to the SAN. However my problem with moving stuff is compounded
by the version of CentOS we are running having issues with the host's
iSCSI offload driver. So to get all the other servers off this host we
are recreating them on a separate host and manually rsync'ing the
configs and such over to the new hosts. This still leaves the backuppc
data. I don't have a problem doing the rsync way if it is the only
reasonable option. However I wanted to run it by you guys to make sure
I wasn't missing something obvious.

On Fri, Jun 22, 2012 at 1:14 PM, Les Mikesell lesmikes...@gmail.com wrote:
 On Fri, Jun 22, 2012 at 12:52 PM, Kameleon kameleo...@gmail.com wrote:
 I forgot to explain that the current VM has a 6TB logical volume. I
 would like to move that data to our iSCSI SAN which I have carved out
 a 8TB logical volume on. The problem comes when I try to migrate the
 lv via dd or other means as the host system takes a dump. Does it
 sound like the only option is the rsync -vrlptgoDH command? That is
 what I used to get the data from the original machine to the vm.

 I don't think there is really a good answer to this problem.  You
 might ask on the LVM list if you could reasonably expect to attach
 the iscsi volume as a new physical volume and pvmove to it.   I don't
 have any experience with that.  Dd should work if you have unmounted
 everything so it doesn't change during the copy - if that fails I'd be
 concerned about the robustness of the iscsi connection.    Partclone
 might be able to do it without copying the unused space but I don't
 know how it relates to lvm (but I think clonezilla uses it somehow).

 Rsync with the -H option or other file-oriented copying approaches
 will take an extremely long time to reconstruct the hardlinks and has
 to copy the entire tree in one run to get it right.   Using rsync on
 the pool or cpool dir, then the BackupPC_tarPCCopy tool for the pc
 directory is supposed to be faster, but it will still take a long time
 and the system has to be down for the rsync and tarPCCopy runs to
 complete.

 I've usually just kept the old system around for emergency restores
 while the new replacement collected its own history.

 --
   Les Mikesell
      lesmikes...@gmail.com
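
A sketch of the "rsync pool/cpool, then BackupPC_tarPCCopy" recipe Les
mentions, assuming BackupPC 3.x defaults and a new filesystem mounted at a
hypothetical /mnt/new; BackupPC must be stopped for the whole run:

    rsync -aH /var/lib/backuppc/pool/  /mnt/new/pool/
    rsync -aH /var/lib/backuppc/cpool/ /mnt/new/cpool/
    mkdir -p /mnt/new/pc && cd /mnt/new/pc
    /usr/share/backuppc/bin/BackupPC_tarPCCopy /var/lib/backuppc/pc | tar xPf -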




Re: [BackupPC-users] Yet another moving backuppc question

2012-06-22 Thread Kameleon
On Fri, Jun 22, 2012 at 2:01 PM, Les Mikesell lesmikes...@gmail.com wrote:
 On Fri, Jun 22, 2012 at 1:37 PM, Kameleon kameleo...@gmail.com wrote:
 Thanks for the reply Les. Unfortunately setting up a new backuppc
 and leaving the old is not an option in our case. Here is our
 situation:

 When we initially ordered this server we ordered it with 32GB ram,
 dual quad-core xeon's, and 8x 2TB drives. We had to pass it off as a
 backup server to get the funding through. So as soon as we got it, I
 setup Xen on the physical host and then got to carving out LV's for
 virtual machines. One of which was the 6TB for the backuppc store. Now
 here we are a few years later and we are wanting to remove the disks
 from this host and swap them with another physical server that has 8x
 750GB drives. This is so that we can move the backuppc server onto the
 physical host and to a different building away from the hosts it is
 backing up. So I have to move everything off this physical host,
 including the 6TB backuppc data store. I am trying to move everything
 I can to the SAN. However my problem with moving stuff is compounded
  by the version of CentOS we are running having issues with the host's
  iSCSI offload driver. So to get all the other servers off this host we
 are recreating them on a separate host and manually rsync'ing the
 configs and such over to the new hosts. This still leaves the backuppc
 data. I don't have a problem doing the rsync way if it is the only
 reasonable option. However I wanted to run it by you guys to make sure
 I wasn't missing something obvious.

 The 'rsync pool/cpool, follow with BackupPC_tarPCCopy' is probably the
 fastest approach, but you are still going to have a long downtime (you
 can't let anything change during the whole operation).  I'd say the
 effort is misguided considering the price of new 2 TB drives these
 days, and I'd want to be really sure about that iscsi issue before
 losing the copy on the local drives.   If you are running an older
 CentOS, maybe you could add a new VM and mount your old partition into
 it to see if that helps.


We just had to buy 2 backup drives and they were over $1000. This is a
Dell server, not some basic desktop drive. Otherwise I would have to
fully agree with you and say we need to just buy new drives. We also
are under a budgetary constraint. Hence the need to move the drives
instead of purchasing new ones.

 Depending on how much you expect to need the old history, you might
 just make a few tar-image snapshots from different points in time with
 BackuPC_tarCreate.   If you need more than a few, they might take more
 overall space but you can be selective about what you save and put it
 on different media.  With that approach you could let your new system
 start running and overlap that with extracting what you want from the
 old one.


I will consider that too. Thanks!

 --
   Les Mikesell
     lesmikes...@gmail.com




Re: [BackupPC-users] Yet another moving backuppc question

2012-06-22 Thread Kameleon
On Fri, Jun 22, 2012 at 2:35 PM, Timothy J Massey tmas...@obscorp.com wrote:

 Kameleon kameleo...@gmail.com wrote on 06/22/2012 02:37:17 PM:


  Thanks for the reply Les. Unfortunately setting up a new backuppc
  and leaving the old is not an option in our case. Here is our
  situation:
 
  When we initially ordered this server we ordered it with 32GB ram,
  dual quad-core xeon's, and 8x 2TB drives. We had to pass it off as a
  backup server to get the funding through. So as soon as we got it, I
  setup Xen on the physical host and then got to carving out LV's for
  virtual machines. One of which was the 6TB for the backuppc store. Now
  here we are a few years later and we are wanting to remove the disks
  from this host and swap them with another physical server that has 8x
  750GB drives. This is so that we can move the backuppc server onto the
  physical host and to a different building away from the hosts it is
  backing up. So I have to move everything off this physical host,
  including the 6TB backuppc data store. I am trying to move everything
  I can to the SAN. However my problem with moving stuff is compounded
   by the version of CentOS we are running having issues with the host's
   iSCSI offload driver. So to get all the other servers off this host we
  are recreating them on a separate host and manually rsync'ing the
  configs and such over to the new hosts. This still leaves the backuppc
  data. I don't have a problem doing the rsync way if it is the only
  reasonable option. However I wanted to run it by you guys to make sure
  I wasn't missing something obvious.


 Why can't you copy (at the block level) the VM image onto the SAN?  This
 should be able to be done at the *HOST* level, making whatever OS you're
 using inside of BackupPC completely meaningless.

 That's the beauty of VM's:  their hard drives are simply files (or
 partitions) on the host, and can easily be moved around anywhere you wish.
  At the HOST level--not the GUEST level.

 If you have issues that prevent you from copying the BackupPC VM image
 from the VM *host*, your problem is completely unrelated to BackupPC and
 you should probably ask somewhere else for more specific help.


 Of course, there are a ton of other ways of solving this:  booting a
 Clonezilla live CD within the BackupPC guest and using Clonezilla to copy
 the data to your SAN; unmounting your BackupPC partition and using netcat
 to copy it to another host; purchasing (or temporarily building) a baby NAS
 to copy your 6TB of data onto from the host instead of directly into the
 SAN and avoid whatever driver issues you have; and I'm sure even more ways.
 Take some time and think outside of the box (see my signature!)--and make
 sure that your objections to each of these ways *really* are objections,
 and not you eliminating every way but the one way you currently envision.

 Tim Massey
 Out of the Box Solutions, Inc.
 Creative IT Solutions Made Simple!
 http://www.OutOfTheBoxSolutions.com
 tmas...@obscorp.com
 22108 Harper Ave., St. Clair Shores, MI 48080
 Office: (800)750-4OBS (4627)
 Cell: (586)945-8796




Thanks for that Tim. The host OS is CentOS, the backuppc OS is ubuntu. The
problem lies in the host OS's support, or broken support, for the iscsi
offload function. I have tried moving the data with dd and all was fine for
about an hour then the host machine just took a dump and became unusable. I
had thought about using clonezilla but again to get it on the SAN I would
have to rely on the hosts iSCSI support. My only other option that I can
see is to stop the backuppc service, and use rsync/netcat/etc to get the
data off the vm and on to the SAN. This would only use the networking and
not the iscsi offload as I would just have a separate host connected to the
SAN and have it writing the data from the current backuppc data store. I
want to find another solution, a faster one if possible, but I think I am
limited by the not being able to use iSCSI issue.

Re: [BackupPC-users] Yet another moving backuppc question

2012-06-22 Thread Kameleon
On Fri, Jun 22, 2012 at 2:55 PM, Timothy J Massey tmas...@obscorp.com wrote:

 Kameleon kameleo...@gmail.com wrote on 06/22/2012 03:41:56 PM:


  We just had to buy 2 backup drives and they were over $1000. This is a
  Dell server, not some basic desktop drive. Otherwise I woudl have to
  fully agree with you and say we need to just buy new drives. We also
  are in a budgetary constraint. Hence the need to move the drives
  instead of purchase new ones.


 I have *NEVER* understood the logic of putting BackupPC into your SAN.
  First of all, your SAN contains your production data, so why do you want
 to store the backups there?  Second of all, it uses (I assume, given the
 cost) 15k RPM SAS drives, which are *crazy* fast and *crazy* expensive.
  You can fix both of those by throwing a bunch of 7200 RPM SATA drives into
 a reasonably low-end PC for less than the price of those two SAS drives.

 Backup is about bulk storage with reasonable (near-line) performance and
 reliability.  SAN is all about maximum performance and maximum reliability.
  Their strengths do not mesh well.

 Besides, it's backup.  If you're using it, something has already gone
 pretty far wrong.  Do you really want your backup server to be several
 layers away from the underlying data?  I don't.  What if the thing that
 caused you to need the backup was an issue in the virtualization
 system--the same thing that your backups are depending on!

 In the case of online data, multiple copies is not practical.  In the case
 of backups, multiple copies are perfectly fine.  You can achieve multiple
 copies by using the archive feature, or even easier by having two BackupPC
 servers!

 I'm sure you've got a dozen objections why you just *can't* put BackupPC
 on separate, inexpensive hardware.  But you've very effectively painted
 yourself into an awkward corner.  We don't have a rabbit to pull out of our
 hat.  Speaking for myself, I think that your underlying assumptions and
 basic decisions have more to do with the awkward spot you're in than does
 the technology.

 Tim Massey
 Out of the Box Solutions, Inc.
 Creative IT Solutions Made Simple!
 http://www.OutOfTheBoxSolutions.com
 tmas...@obscorp.com
 22108 Harper Ave., St. Clair Shores, MI 48080
 Office: (800)750-4OBS (4627)
 Cell: (586)945-8796



Tim,

I see what you are getting at. However I am merely trying to get
the data onto the SAN as an intermediary step. Once I get all the data off
the physical host I can then pull the drives and put them in the other
physical host where backuppc will be staying. Then comes the job of moving
all the data once more, this time from the SAN to the physical host. So
yes, the backup server will be down for a bit, but hopefully this will be
the last time we have to do anything like this, at least for a while. The
drives are near-line SAS/SATA 7200rpm 6Gb/s 2TB drives straight from
Dell. As for putting it on different hardware that costs less than 2
drives, I would love to. My hands are tied though, as our supervisor insists
on only using server-grade hardware. I too have had this discussion, and
others very similar, with her multiple times. However she is the one that
has to make the final decision and she is the one that gets the PO's
approved or not. I have even shown her how I was able to build a storage
server at home for around a grand (intel core2duo, 6GB ram, 9x 1TB drives).
I then promptly got the "that's not on the EPL" talk. We are a government
agency and can only purchase what they tell us we can. Yet another great
example of the inefficiency of government. I do what I can when I can but
this is not one of those times.

This agency has had a lot of decisions made along the way that I don't agree
with and I am trying to fix them as best I can. Case in point is exactly
what is causing us to need to move the data in the first place. Why have
your backup server on the same physical host as the servers you are backing
up? Craziness I tell you! So that is why I am trying to get the data off
and back on to a physical server that we can relocate on the opposite side
of the campus. I am not objecting to any pointers or suggestions that
anyone has given on my own accord, I am just painted into an awkward corner.
[BackupPC-users] Multi-tenancy for web interface possible

2012-06-21 Thread Kameleon
We have one central backuppc server that we have been using for some
years now. In our agency we actually have another IT group that
handles a small subset of users. That group's backup server has crashed
and we are looking at adding them to our backuppc server. However we
do not want them to have access to our hosts and they need their own
login. We would just set up another backuppc server for them, but we
want to utilize the de-duplication characteristics of backuppc to the
maximum, hence sharing our server with them. Is this possible?

Donny B.



Re: [BackupPC-users] Multi-tenancy for web interface possible

2012-06-21 Thread Kameleon
That is exactly what I was looking for. Thanks for the fast response.
It has been forever since I have set up backuppc, but on our current
servers we use user "root" and log in to the backuppc web interface
with user "backuppc". This user is able to see every host we have. So I
need to do some digging and remember how we set up authentication. But
it should be as easy as adding another user to whatever mechanism we
used and putting user "root", More users "newusername" on their hosts,
correct?

On Thu, Jun 21, 2012 at 11:33 AM, Chris Stone axi...@gmail.com wrote:
 Donny,


 On Thu, Jun 21, 2012 at 10:24 AM, Kameleon kameleo...@gmail.com wrote:

 We have one central backuppc server that we have been using for some
 years now. In our agency we actually have another IT group that
 handles a small subset of users. That groups backup server has crashed
 and we are looking at adding them to our backuppc server. However we
 do not want them to have access to our hosts and they need their own
 login. We would just setup another backuppc server for them but we
 want to utilize the de-duplication characteristics of backuppc to the
 maximum, hence sharing our server with them. Is this possible?


 Sure you can - we do that here on 3 backup servers we run with BackupPC.
 Just add their login user names to the BackupPC hosts file for the hosts
 that login should have access to. For example, for the example hosts file:

 host      dhcp    user    moreUsers    # <--- do not edit this line
 farside   0       craig   jill,jeff    # <--- example static IP host entry
 farside2  0       craig

 In this case, craig, jill and jeff would have access to the computer
 farside. No other users would have access to farside - nor would they even
 see that it's setup on the server. For farside2, only craig would have
 access - jill and jeff would not.



 Chris


 --
 Chris Stone
 AxisInternet, Inc.
 www.axint.net





Re: [BackupPC-users] Multi-tenancy for web interface possible

2012-06-21 Thread Kameleon
Well, I think we never used any of the authentication and it didn't
set up a .htaccess anywhere. Since we are in the midst of moving the
server and going from ubuntu to centos I will be sure to fully
configure it correctly this time. Thanks again.

On Thu, Jun 21, 2012 at 12:15 PM, Chris Stone axi...@gmail.com wrote:
 Donny,


 On Thu, Jun 21, 2012 at 10:59 AM, Kameleon kameleo...@gmail.com wrote:

 with user backuppc. This user is able to see every host we have. So I
 need to do some digging and remember how we setup authentication. But
 it should be as easy as adding another user to whatever mechanism we
 used and putting user root More users newusername on their hosts
 corect?


 The web interface uses HTTP authentication and sets up (by default as I
 recall) a .htaccess file in your cgi-bin directory (e.g. /var/www/cgi-bin)
 like:

 [root@axisbackup ~]# cat /var/www/cgi-bin/.htaccess
     AuthGroupFile /etc/httpd/conf/group    # <--- change path as needed
     AuthUserFile /etc/httpd/conf/passwd    # <--- change path as needed
     AuthType basic
     AuthName "AxisBackup Access"
     require valid-user

 So, with this, you'd add a new user with:

 htpasswd /etc/httpd/conf/passwd newusername

 You'll be prompted for the password and then that user (newusername) will be
 added to the /etc/httpd/conf/passwd file and will then be able to log in.
 Link them to hosts in the backuppc hosts file and you should be all set.




 Chris


 --
 Chris Stone
 AxisInternet, Inc.
 www.axint.net





Re: [BackupPC-users] Multi-tenancy for web interface possible

2012-06-21 Thread Kameleon
Of course. :) Although the authentication was the backuppc user and
its password.

On Thu, Jun 21, 2012 at 12:59 PM, Chris Stone axi...@gmail.com wrote:

 On Thu, Jun 21, 2012 at 11:29 AM, Kameleon kameleo...@gmail.com wrote:

 Well I think we never used any of the authentication and it didn't
 setup a .htaccess anywhere. Since we are in the midst of moving the
 server and going from ubuntu to centos I will be sure to fully
 configure it correctly this time. Thanks again.


 Hope you had it firewalled! No authentication would open access to all of
 your files by anybody that wanted them.



 Chris

 --
 Chris Stone
 AxisInternet, Inc.
 www.axint.net





[BackupPC-users] Yet another moving backuppc question

2012-06-21 Thread Kameleon
Currently our backuppc server is a virtual server. The root partition
is separate from the data store. All of its storage is on the local
virtual server host. It has a 6TB partition with about 85%
utilization. I am wanting to move the data to our iSCSI SAN. Every
time we have tried moving the lvm of the data store, the host machine
freezes. So I am pretty much left with only rsync or a similar way to
move stuff. Are there any other ways I have missed? Should the built-in
backuppc_tar way of moving stuff be more efficient or better?



Re: [BackupPC-users] Multi-tenancy for web interface possible

2012-06-21 Thread Kameleon
Yes. So now I have to rethink the way I initially set this up. I put
in root as the user on each server, thinking that is what it used to
log in to the server. The way the documentation reads, it is only who
can administer that server and its backups, but it was not clear, to me,
that it didn't have to be root. Now I see that the root login to the
server is specified in the RsyncClientCmd setting. I will be changing
this asap. Thanks for the input guys.
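
For reference, a sketch of the setting in question; this mirrors the
BackupPC 3.x default with the login user swapped for a hypothetical
unprivileged "backup" account:

    $Conf{RsyncClientCmd} = '$sshPath -q -x -l backup $host $rsyncPath $argList+';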

On Thu, Jun 21, 2012 at 1:51 PM, Les Mikesell lesmikes...@gmail.com wrote:
 On Thu, Jun 21, 2012 at 1:28 PM, Kameleon kameleo...@gmail.com wrote:
 Right, the other group won't be adding/deleting/etc hosts. All they
 will do is log in to do restores to their already setup hosts. What we
 want is for them to be able to login and only see their hosts and not
 ours. This is what you are saying is how it works correct?

 Yes, the admin user/group can do everything.  Other users only see
 what is delegated.  Sometimes when people say multi-tenancy they mean
 independent groups with separate and exclusive administration.   As
 long as one login or group is allowed to see everything it will do
 what you want.

 --
   Les Mikesell
     lesmikes...@gmail.com




[BackupPC-users] Moving backuppc to a new machine

2012-04-20 Thread Kameleon
Currently our backuppc server is a Xen pv domU running Ubuntu 10.04.
It has served us well over the past two years. However it is time to
move it out of the virtual environment and back onto physical
hardware. This is only so that it can be located on the far edge of
our campus as far away from the physical servers it backs up as
possible while still keeping it on the fiber network. So with that we
are looking to install a fresh OS on the new hardware. We could stay
with Ubuntu and just load 12.04. Most of our other servers are Centos
or Fedora. Is there one distribution that is better than the other for
backuppc? I will be moving the /var/lib/backuppc pool (it is on its
own lv) to the new machine. Should I expect any problems with this?

Thanks in advance for any and all input.

Donny B.



Re: [BackupPC-users] Moving backuppc to a new machine

2012-04-20 Thread Kameleon
On Fri, Apr 20, 2012 at 1:28 PM, Les Mikesell lesmikes...@gmail.com wrote:
 On Fri, Apr 20, 2012 at 12:41 PM, Kameleon kameleo...@gmail.com wrote:
 Currently our backuppc server is a Xen pv domU running Ubuntu 10.04.
 It has served us well over the past two years. However it is time to
 move it out of the virtual environment and back onto physical
 hardware. This is only so that it can be located on the far edge of
 our campus as far away from the physical servers it backs up as
 possible while still keeping it on the fiber network. So with that we
 are looking to install a fresh OS on the new hardware. We could stay
 with Ubuntu and just load 12.04. Most of our other servers are Centos
 or Fedora. Is there one distribution that is better than the other for
  backuppc? I will be moving the /var/lib/backuppc pool (it is on its
  own lv) to the new machine. Should I expect any problems with this?

 Thanks in advance for any and all input.

 If you like centos, I'd probably go with a Centos 6.2 since that
 should have a very long life with update support.  But, there are some
 differences in the packaging and directory naming conventions between
 the EPEL rpm and the debian/ubuntu .debs that you might need to
 understand.   I don't think that should affect the archive layout,
 though.  Moving large pools is always a problem if you try to do
 file-level copies. Moving the disks or image copies should work,
 though.

 --
  Les Mikesell
    lesmikes...@gmail.com


When I set this machine up I set the pool to be on a separate logical
volume just for the reason of future expansion. The move to this
machine was the good old rsync -vrlptgoDH way and it did take
forever! That was when we only had roughly 1.5TB of pool storage, while
now we have over 5TB. It is not a possibility to leave the existing
pool where it is, as we are pulling the drives to install in a
different machine, meaning re-initializing them. Since the pool is on
its own lv, I plan to just export that lv to our SAN temporarily, then
export it to its final destination once the final machine is set up.
Thanks for the input.

Donny B.



Re: [BackupPC-users] Anyone using r1soft hotcopy as a pre/post backup command?

2010-08-31 Thread Kameleon
Ok, I tested the following and it worked. Here is what I used, for others'
future reference:

DumpPreUserCmd: $sshPath -q -x -l root $host /usr/sbin/hcp /dev/sda1
(/dev/sda1 is my / partition so I run hcp against that)
DumpPostUserCmd: $sshPath -q -x -l root $host /usr/sbin/hcp -r /dev/hcp1
(this stops the snapshot and unmounts it in one fell swoop)
RsyncShareName: /var/hotcopy/sda1_hcp1 (or whatever your version of hcp
mounts the snapshot to)
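
For reference, here is a sketch of how those lines might look in a per-host
config file (config.pl syntax; it assumes the same /dev/sda1 partition and
hcp mount point as above -- adjust for your devices):

$Conf{DumpPreUserCmd}  = '$sshPath -q -x -l root $host /usr/sbin/hcp /dev/sda1';
$Conf{DumpPostUserCmd} = '$sshPath -q -x -l root $host /usr/sbin/hcp -r /dev/hcp1';
$Conf{RsyncShareName}  = ['/var/hotcopy/sda1_hcp1'];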

Obviously I am using rsync to back up this machine. I have tested backups
and restores. For a restore, you just have to tell it to restore to / instead
of the /var/hotcopy/. path and you should be good.

I hope this helps someone in the future.

Donny B.


On Tue, Aug 31, 2010 at 11:03 AM, Kameleon kameleo...@gmail.com wrote:

 I am looking at using r1soft's hotcopy to enable a snapshot of the
 filesystem before backuppc does its magic, similar to how Microsoft Volume
 Shadow Copy or LVM snapshots work. What I am wondering is this: does
 anyone currently use this type of setup, and if so, would you mind sharing
 your pre/post commands so I can compare them to what I am thinking? Thanks in
 advance.

 Donny B.



Re: [BackupPC-users] [OT] My mail sent from gmail are not sent back to my account

2010-08-19 Thread Kameleon
Also remember that Gmail will thread your messages together and can
mark your message as read, since it will be the same as the one in your
Sent folder. Took me a while to realize that, too.

On 8/19/10, Tyler J. Wagner ty...@tolaris.com wrote:
 On Thursday 19 Aug 2010 11:18:58 Farmol SPA wrote:
 An OT for a question: my messages sent from a gmail.com account (like
 this) are received by the mailing list (and I can see them on the web
 archive) but they are not sent back to my account as a mailing list
 message. Thus I see only answers but not my original post.

 This is a configurable per-address Mailman list setting. If you want to
 change
 it, login here:

 https://lists.sourceforge.net/lists/listinfo/backuppc-users

 On the second page, enter your password to complete login. Then change the
 option "Receive your own posts to the list" to "Yes".

 If it is already set to yes, start looking in filters and junk email traps
 at
 your end.

 Regards,
 Tyler

 --
 No one can terrorize a whole nation, unless we are all his accomplices.
-- Edward R. Murrow





Re: [BackupPC-users] backing up over WAN

2010-07-07 Thread Kameleon
Yes, this is what we do with 4 of our sites that are off our fiber
ring. Although 2 have the bandwidth to complete the initial backup, the
other 2 don't, so we just take the backup server on site and let it
get an initial full. As Les stated, after the initial full it will
work the way you intend.

Donny B.

On 7/7/10, Les Mikesell lesmikes...@gmail.com wrote:
 Udo Rader wrote:
 -BEGIN PGP SIGNED MESSAGE-
 Hash: SHA1

 Hi,

 we need to setup a reasonable backup solution for one of our
 customers, consisting of a HQ office and two branches. The most
 important requirement is off-site backup. Backup volume will be
 approximately 50 GB per office for a full backup, meaning 150GB in total.

 So far, we have used backuppc for almost any backup solution, but this
 time I am not 100% sure because of the WAN side of life.

 What I am wondering is if it was possible to retrieve files from the
 clients more intelligently:

 If I understand BackupPC's pooling concept correctly, pooling takes
 place after the file has been transferred to the server and is compared
 to other files in the pool.

 Yet what I have in mind would be don't transfer a file if we already
 have it in the pool, thus drastically reducing the amount of data
 transferred over the net.

 IIRC, the rsync protocol per se should just allow that, but I am unsure
 if BackupPC utilizes it at that level.

 So is this transfer minimization doable with BackupPC?

 Rsync compares only against the last full backup tree on the same host,
 transferring the differences.  Once you get started it will work the way you
 want, but each new machine will transfer everything on the first run and if
 the
 same file is added on many machines, it will be copied separately from each,
 then merged in the pool.   One thing you might do to to help get started
 would
 be to take the server on-site to each office for the first full run, then
 move
 to its permanent location.   Or if you have the bandwidth, perhaps you can
 do
 the initial run over a weekend.

 --
Les Mikesell
 lesmikes...@gmail.com





Re: [BackupPC-users] Splitting up large directories.

2010-05-18 Thread Kameleon
Why not just let BackupPC take a week or so to traverse the entire pool,
using $Conf{BackupPCNightlyPeriod} = 7 or similar? You can also use rsync's
bandwidth limit so backups won't kill their outbound connections: pass
--bwlimit=XX, where XX is the cap in kB/sec.
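
In config terms (a minimal sketch, assuming BackupPC 3.x, where extra rsync
flags live in $Conf{RsyncArgs}; the 125 kB/sec figure is just an example):

$Conf{BackupPCNightlyPeriod} = 7;            # spread the nightly pool walk over 7 nights
push @{$Conf{RsyncArgs}}, '--bwlimit=125';   # cap rsync transfers at 125 kB/sec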

On Tue, May 18, 2010 at 4:04 PM, Robin Lee Powell 
rlpow...@digitalkingdom.org wrote:


 A customer we're backing up has a directory with ~500 subdirs and
 hundreds of GiB of data.  We're using BackupPC in rsync+ssh mode.

 As a first pass at breaking that up, I made a bunch of separate host
 entries like /A/*0, /A/*1, ... (all the dirs have numeric names).

 That seems to select the right files, but it doesn't work because
 BackupPC ends up running a bunch of them at once, hammering that
 customer's machine.

 I could make those into share names, but I'm worried about running
 out of command-line argument space in that case; that is, that
 rsync /A/*0 will at some point in the future expand to a hundred
 or more directories and break.

 What I want to do is have a bunch of shares like [ /A, /A, ...],
 and have something like:

 $Conf{BackupFilesOnly} = {
  '/A' => '/A/*0',
  '/A' => '/A/*1',
  '/A' => '/A/*2',
 }

 But obviously that's not going to work.

 Does anyone have any other way to handle this?

 -Robin

 --
 http://singinst.org/ :  Our last, best hope for a fantastic future.
 Lojban (http://www.lojban.org/): The language in which this parrot
 is dead is ti poi spitaki cu morsi, but this sentence is false
 is na nei.   My personal page: http://www.digitalkingdom.org/rlp/




Re: [BackupPC-users] Assistance with long term snapshots and levels

2010-05-06 Thread Kameleon
Also, you can look up the Tower of Hanoi backup rotation scheme. This
is what we use on one of our picky servers. It will basically keep
fulls around for up to 3 years or something silly like that. I can
provide a link once I get back to the office in the morning.
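
BackupPC doesn't implement Hanoi rotation by name, but its exponential
full-backup expiry gives a similar spread of backup ages. A minimal
sketch (the counts here are illustrative, not a recommendation):

$Conf{FullPeriod}  = 6.97;        # weekly fulls
$Conf{FullKeepCnt} = [4, 2, 3];   # keep 4 fulls one week apart, then 2 fulls
                                  # two weeks apart, then 3 fulls four weeks apart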

On 5/6/10, Brian Mathis brian.mat...@gmail.com wrote:
 On Thu, May 6, 2010 at 5:53 PM, Les Mikesell lesmikes...@gmail.com wrote:
 On 5/6/2010 4:17 PM, Brian Mathis wrote:
 I'm new to BackupPC and looking to keep long term snapshots.  I've
 been reading through the docs and I think the levels concept will do
 what I want it to, but I still have some outstanding questions.

 I'm looking to use an 84 day cycle where:
 - Day 0 is the full backup (level 0)
 - Day 7, 14, 21 are incrementals relative to D0 (level 7)
 - Day 28 is an incremental to D0 (level 1)
 - Day 35, 42, 49 are incrementals relative to D28 (level 7)
 - Day 56 is an incremental to D28 (level 2)
 - Day 63, 70, and 77 are incrementals relative to D56 (level 7)
 - Day 84 is the next full backup (level 0)
 There are no daily incrementals

 I think the above example means I need the config:
      FullPeriod = 83.97
      IncrPeriod = 6.97
      IncrLevels = [7, 7, 7, 1, 7, 7, 7, 2, 7, 7, 7]

 Am I missing anything in that config?

 The next piece is incremental expiration.  I would like to keep at
 most six level 7 backups, but keep the level 0, 1, and 2 forever.  I
 only see IncrKeepCnt which seems to be an all-or-nothing expiration
 number that doesn't take levels into account.  Is this possible?

 I expect each full backup to be large, and I'm hoping that this sort
 of scheme will make the best use of disk space and file redundancy,
 while keeping a long timeline of snapshots.

 Suggestions are welcome.

 Before you do something like this, be sure you understand how both
 pooling and rsync work.  Backuppc will keep only one copy of any file and
 will replace any others that have identical content with hardlinks.
 There will be next-to-no difference in disk space used by an incremental
 vs. full or a different level of incremental.  Also, when you use rsync,
 there is not a big difference in network bandwidth use between fulls and
 incrementals because it only copies the differences anyway.  Fulls do
 take much more time to complete, though, because even unchanged files
 are read each time for the comparison where incrementals skip any with
 identical timestamps and lengths.

 --
   Les Mikesell
    lesmikes...@gmail.com


 Good point.  I've used rsync many times in the past and our current
 custom solution uses the rsync hard link tricks to get the same kind
 of advantages.  I'm looking to get onto a less custom solution, hence
 backuppc.

 It sounds like you're saying I can probably achieve a similar effect
 by doing a level 0 every 28 days and then doing weekly incrementals
 relative to the level 0, and avoiding the multi-tiered level system I
 outlined?

 I was probably thinking that a full will always take up a full amount
 of disk space, but you're right and I sort of forgot it all goes into
 the pool and gets deduped anyway.





Re: [BackupPC-users] Outsourcing backup

2010-05-04 Thread Kameleon
This is very true. We have 4 remote sites, 2 on T1s (1.5 megabit up and down)
and 2 on DSL (1.5 megabit down x 256 kilobit up), and a central backuppc
server located in-house that backs up those 4 servers and 10 more in house on
the local fiber network. The only issue was the initial full on the 2 DSL
connections. Since they are local to us, I was able to take the server there
and run the initial full no problem. I use rsync on all the servers. The
total of all the servers is over 2TB, with the remote sites having about
700GB of that. So you should be good with rsync if you just get the initial
full onsite.

On Tue, May 4, 2010 at 7:50 AM, Les Mikesell lesmikes...@gmail.com wrote:

 Inno wrote:
  I have not a decent internet connectivity and I have more than 500 GB.
 

 With rsync the size doesn't matter much after you get the initial copy
 (which
 you might take on-site and carry over or let it run over a weekend).  The
 bandwidth/time you need nightly would depend on the rate of change in the
 content.

 --
Les Mikesell
lesmikes...@gmail.com





Re: [BackupPC-users] Outsourcing backup

2010-05-04 Thread Kameleon
What I would do is this:

Set up backuppc for company A at the company B location
Set up backuppc for company B at the company A location

That way you will have total failover redundancy with no lost weeks. If,
say, company A burns or is robbed, you have the backup of its servers at
the company B location, and vice versa. This way you are maximizing your
protection, and you don't have to rely on sneakernet to transfer the
drives. Everything is automagic, which nearly eliminates human
intervention. With this setup I would stagger the full backups to
different days, though.

On Tue, May 4, 2010 at 8:55 AM, Inno in...@voila.fr wrote:

 I will test it (but I want a full backup every week). We have a connection
 with 250 KB/s.

 My goal is not to centralize backups so I wondered if my solution will be
 good.

 Imagine if the company burns or an HDD breaks. For me it's better to have two
 HDDs (at different locations), with a loss of at most one week if one burns. Do you
 understand why I want an exchange between both every week and why I imagined
 these steps? (maybe I'm not clear :-S)

 Thanks.

  Message of 04/05/10 at 15:23
  From: Kameleon kameleo...@gmail.com
  To: General list for user discussion, questions and support
 backuppc-users@lists.sourceforge.net
  Cc:
  Subject: Re: [BackupPC-users] Outsourcing backup
 
  This is very true. We have 4 remote sites, 2 T1's (1.5megabit up and
 down)
  and 2 DSL (1.5megabit down x 256 kilobit up) and have a central backuppc
  server located in-house that backs up those 4 server and 10 more in house
 on
  the local fiber network. The only issue was the initial full on the 2 DSL
  connections. Since they are local to us, I was able to take the server
 there
  and run the initial full no problem. I use rsync on all the servers. The
  total of all the servers is over 2TB with the remote sites having about
  700GB of that. So you should be good with rsync and just get the initial
  full onsite.
 
  On Tue, May 4, 2010 at 7:50 AM, Les Mikesell lesmikes...@gmail.com
 wrote:
 
   Inno wrote:
I have not a decent internet connectivity and I have more than 500
 GB.
   
  
   With rsync the size doesn't matter much after you get the initial copy
   (which
   you might take on-site and carry over or let it run over a weekend).
  The
   bandwidth/time you need nightly would depend on the rate of change in
 the
   content.
  
   --
  Les Mikesell
  lesmikes...@gmail.com
  
  
  
  
  

 









Re: [BackupPC-users] Outsourcing backup

2010-05-04 Thread Kameleon
250k is plenty for this scenario. We run nightly incrementals with weekly
fulls on all of our servers, including the 4 remote sites. The 2 slowest of
our remotes are 256k up and that is still sufficient, provided you don't have
a lot of data change.

On Tue, May 4, 2010 at 9:38 AM, Inno in...@voila.fr wrote:

 It's a very good idea. But with this solution I am forced to do a single
 full then incrementals. And I'm not sure the speed of upload will be
 sufficient.

 And configure BackupPC with :
 FullPeriod = -1
 FullKeepCnt = 1
 FullKeepCntMin = 1
 FullAgeMax = 90
 --
 IncrPeriod = 1
 IncrKeepCnt = 8
 IncrKeepCntMin = 4
 IncrAgeMax = 30
 IncrLevels = 1, 2, 3, 4
 --

 If your solution is not possible, do you think mine will work without
 creating problems in BackupPC?

 Thanks a lot.



  Message of 04/05/10 at 16:15
  From: Kameleon kameleo...@gmail.com
  To: in...@voila.fr, General list for user discussion, questions and
 support backuppc-users@lists.sourceforge.net
  Cc:
  Subject: Re: [BackupPC-users] Outsourcing backup
 
  What I would do is this:
 
  Setup backuppc for company A at the company B location
  Setup backuppc for company B at the company A location
 
  That way you will have total failover redundancy with no lost weeks. So
 if
  say Company A burns or is robbed, etc you have the backup of the
 servers
  at company B location and vise versa. This way you are maximizing your
  protection. And you don't have to rely on sneakernet to transfer the
  drives. Everything is automagic. This nearly eliminates human
 interference.
  With this setup I would stagger the full backups to different days
 though.
 
  On Tue, May 4, 2010 at 8:55 AM, Inno in...@voila.fr wrote:
 
   I will test it (but I want a full backup every week). We have a
 connection
   with 250 KB/s.
  
   My goal is not to centralize backups so I wondered if my solution will
 be
   good.
  
   Imagine if the company burns or HDD breaks. For me it's better to have
 two
   HDD (at different locations) with a loss of one week if its burn. Do
 you
   understand why I want exchange between both every week and why I
 imagine
   this steps ? (maybe I'm not clear :-S)
  
   Thanks.
  
 Message of 04/05/10 at 15:23
 From: Kameleon kameleo...@gmail.com
 To: General list for user discussion, questions and support

    backuppc-users@lists.sourceforge.net
 Cc:
 Subject: Re: [BackupPC-users] Outsourcing backup
   
This is very true. We have 4 remote sites, 2 T1's (1.5megabit up and
   down)
and 2 DSL (1.5megabit down x 256 kilobit up) and have a central
 backuppc
server located in-house that backs up those 4 server and 10 more in
 house
   on
the local fiber network. The only issue was the initial full on the 2
 DSL
connections. Since they are local to us, I was able to take the
 server
   there
and run the initial full no problem. I use rsync on all the servers.
 The
total of all the servers is over 2TB with the remote sites having
 about
700GB of that. So you should be good with rsync and just get the
 initial
full onsite.
   
On Tue, May 4, 2010 at 7:50 AM, Les Mikesell lesmikes...@gmail.com
   wrote:
   
 Inno wrote:
  I have not a decent internet connectivity and I have more than
 500
   GB.
 

 With rsync the size doesn't matter much after you get the initial
 copy
 (which
 you might take on-site and carry over or let it run over a
 weekend).
The
 bandwidth/time you need nightly would depend on the rate of change
 in
   the
 content.

 --
Les Mikesell
lesmikes...@gmail.com



  


  
   
  
  
  
  
  
  
  
  
 

 


[BackupPC-users] Using backuppc with LVM snapshots to reduce rsync server load.

2010-04-29 Thread Kameleon
I have been using backuppc for quite some time now. I use it at home in a
xen domu to back up all my servers/machines, and here at work I use it on a
physical host to back up all of our servers. I use the rsync method where
applicable and rsyncd on the Windows machines.

One thing I have noticed lately here at work is that the load on the server
being backed up goes sky high while a backup is running. I have thought about
ways to avoid this. I remember reading, somewhere I cannot find again, that
someone used a method similar to what I am thinking of. Most of our servers
will be Xen domUs running in logical volumes located on the dom0 hypervisor.
What I was thinking may be possible is taking an LVM snapshot of the virtual
machine's logical volume, mounting the snapshot, and running backuppc against
that snapshot for that specific host.

It should be straightforward except that a few of our VMs would have nested
LVs. For example, a CentOS VM installed on the dom0 LV named /dev/xenvg/domu1
will in turn have a swap and a root LV that it sees and runs from. I have
used tools like kpartx before to split these nested LVs into readable data,
so that may be part of the solution. Does anyone have experience with similar
setups who can guide me in the right direction?
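
A rough sketch of the idea (hedged: the VG/LV names, snapshot size and mount
point are made up, and kpartx's exact /dev/mapper names vary; if the guest
runs LVM inside, you would also need vgscan/vgchange -ay between the kpartx
and mount steps):

lvcreate --snapshot --size 5G --name domu1-snap /dev/xenvg/domu1
kpartx -av /dev/xenvg/domu1-snap        # map the partitions nested in the snapshot
mount -o ro /dev/mapper/domu1-snap1 /mnt/domu1-snap
# ... point the BackupPC share for this host at /mnt/domu1-snap ...
umount /mnt/domu1-snap
kpartx -dv /dev/xenvg/domu1-snap
lvremove -f /dev/xenvg/domu1-snap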


Re: [BackupPC-users] Best Linux base

2010-04-29 Thread Kameleon
On Thu, Apr 29, 2010 at 3:22 PM, Les Mikesell lesmikes...@gmail.com wrote:

 On 4/29/2010 2:42 PM, Eddie Gonzales wrote:
  My issues were mainly due to my lack of linux knowledge, but now it's
  getting the exim mail service on my debian to send me emails. I know I
  will rebuild if I put this in production so wanted to know If I should
  change bases if I do to make things easier to get all working.

 I could probably help if it were Centos and sendmail, but it's probably
 just a matter of asking in the right place to get exim help.  You may
 have two issues - one is getting the mailer to send it and the other is
 setting a 'From:' that the receiving host will accept - most mailers
 these days will reject anything that doesn't have a domain that DNS will
 resolve in the sender's From: address.

 If I were about to set up a new machine, I'd consider waiting for the
 ubuntu 10.04 LTS release that is due in a week or so.

 You mean released today. ;)

 --
Les Mikesell
lesmikes...@gmail.com







Re: [BackupPC-users] Migrating backup machines

2010-04-23 Thread Kameleon
I too had the same question. However, I did think about mounting an iSCSI
volume and creating a secondary LV on it to move the snapshot to. Other
options would be to boot the server with something like Clonezilla, and the
target machine with the same, and do a direct copy that way. I am actually
looking at moving our production backuppc machine to a Xen domU on top of a
DRBD-backed LV so I can have fully redundant live failover. Plus that makes
it just that much easier to move between machines if need be.

On Fri, Apr 23, 2010 at 4:33 PM, B. Alexander stor...@gmail.com wrote:

 Thanks Tyler,

 However, for snapshotting to work, don't you have to have at least as much
 space for the snapshot as you do for the original partition? I currently am
 using nearly 60% of the VG just for backups, which is the root of the
 problem. It wasn't apparent in my scanning of the article (I'll go back and
 read for content in a bit).

 --b


 On Fri, Apr 23, 2010 at 4:44 PM, Tyler J. Wagner ty...@tolaris.comwrote:

 On Wednesday 21 April 2010 06:03:08 Tyler J. Wagner wrote:
  I am currently in the process of doing this in two steps:
 
  1. Moving the cpool to a partition with LVM, so as to be able to make
   snapshot binary backups in future.
 
  2. Copying the snapshot over the network to a backup server.
 
  I'll blog it soon and post here.  In short, the fastest way to duplicate
   your pool is: dd if=/dev/partition1 of=/dev/partition2

 As promised, how to use LVM to clone a BackupPC pool.


 http://www.tolaris.com/2010/04/23/using-lvm-to-make-a-live-copy-of-a-backuppc-pool/

 Regards,
 Tyler

 --
 Before we got into this war there were countless 'military experts'
 and intelligence analysts that told us this was a good idea, that we
 had to do it.  That presented their information, and were so terribly
 wrong.  These people are still affecting public policy.  They are still
 considered experts.  I'm sorry, shouldn't there be a rule or law that
 says if you fuck things up so badly, you can no longer be considered
 an expert?
   -- Tim Robbins










Re: [BackupPC-users] Incremental Seems To Backup Whole System

2010-02-18 Thread Kameleon
I can speak from experience on the matter of a slow link back to the
backuppc server. We have multiple sites that we back up to a central
backuppc server. 2 of the sites have a 256k upload and 2 others are on
T1s. The only issue is getting the initial full. What I did on the 2
local (256k) sites is take the backuppc server to the site and run the
initial full backup. After that, everything is able to run without
issue over any link we have. So in that regard you should be fine.



On 2/18/10, Mike Bydalek mbyda...@compunetconsulting.com wrote:
 On Thu, Feb 18, 2010 at 12:04 PM, John Rouillard
 rouilj-backu...@renesys.com wrote:
 On Thu, Feb 18, 2010 at 07:51:13AM -0700, Mike Bydalek wrote:
 My question is, why did backups 13 and 14 backup all that data?  Same
 with 2 and 7 for that matter.

 What level are your incremental backups? if backup 2 was at level 1
 and backup 7 was at level 1 (you use levels 1 2 3 4 5 6) and backup 13
 is back at level 1 that's kind of what I would expect since level 1
 backs up everything since the last full.

 However 14 should be quite a bit less unless it was also a level 1.

 Below is my config.  I'm still messing with the IncrLevels and have a
 super short period just to get some increments and all that going.
  [...]
 $Conf{IncrLevels} = [
   '1',
   '2',
   '3',
   '4',
   '5',
   '6'
 ];


 After re-reading the documentation for {IncrLevels} again the
 configuration settings are starting to make sense.  The only question
 I have left is, does creating a new full backup *have* to do the
 entire full backup again?  Can't it just perform an increment and
 merge it to create a full?  The reason I ask is I'm planning on moving
 this server off-site so it'll go over a WAN.  Sending 250G over a 1M
 connection every week or two doesn't sound fun!  Is this what
 $Conf{IncrFill} is supposed to handle?

 What I want is to basically perform a backup every day and keep 30
 days of backups without doing another 'full' backup.  I don't really
 care how many 'full' backups I have as long as I can restore from 29
 days ago.  Would these settings do the trick for that?

 $Conf{FullPeriod}  = 30;
 $Conf{IncrPeriod}  = 1;
 $Conf{IncrKeepCnt} = 30;
 $Conf{IncrLevels}  = [1, 2, 3, 4, 5, 6 .. 30];
 $Conf{IncrFill} = 1;

 This may start to get off topic, so I can start a new thread if needed.

 Thanks for your help!

 Regards,
 Mike





Re: [BackupPC-users] Exclude not working as expected

2010-02-09 Thread Kameleon
Anything that you want excluded only at a specific location needs its full
path in the excludes. Otherwise the pattern will match, and exclude, anywhere
else in the filesystem as well.
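
For rsync-based backups the difference looks like this (a minimal sketch;
the share name and patterns are just examples):

$Conf{BackupFilesExclude} = {
    '/' => [
        '/proc',    # anchored: excludes only the top-level /proc
        'proc',     # unanchored: excludes every file or directory named proc
    ],
};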

On Tue, Feb 9, 2010 at 5:03 PM, Mark Wass m...@market-analyst.com wrote:

 Hi Bowie

 Thanks for clearing that up. So does that mean I should also amend these
 other excludes by putting a forward slash in front?

 'etc/fstab', == '/etc/fstab',

 'var/cache/apt/archives/*', == '/var/cache/apt/archives/*',

 Thanks

 Mark

 -Original Message-
 From: Bowie Bailey [mailto:bowie_bai...@buc.com]
 Sent: Wednesday, 10 February 2010 12:40 AM
 To: backuppc-users@lists.sourceforge.net
 Subject: Re: [BackupPC-users] Exclude not working as expected

 Mark Wass wrote:
 
  Hi Guys
 
  I have a config file that looks likes this:
 
  $Conf{BackupFilesExclude} = {
 
  '/' => [
 
  'dev',
 
  'proc',
 
  'sys',
 
  'tmp',
 
  'var/lib/mysql',
 
  'etc/fstab',
 
  'var/log/mysql/mysql-bin.*',
 
  'var/log/apache2/*',
 
  'shares',
 
  'var/lib/cvs',
 
  'var/lib/cvs-old',
 
  'var/cache/apt/archives/*',
 
  'var/log/samba/*',
 
  'var/log/installer/*',
 
  'var/log/apt/*',
 
  'var/log/samba/*',
 
  'HDD2'
 
  ]
 
  };
 
  $Conf{BackupFilesOnly} = {};
 
  $Conf{ClientNameAlias} = '192.168.1.3';
 
  $Conf{RsyncShareName} = [
 
  '/'
 
  ];
 
  I've got an exclude in there for proc. The problem I'm getting is
  that proc is also getting excluded from /opt/webmin/proc; I
  only want the proc directly on the root / share to be excluded. How
  can I make sure that no other proc folders are excluded?
 

 You are telling it that you want all files/directories called 'proc' to
 be excluded. If you only want to exclude '/proc', then list it that way.
 You probably want to do the same thing with most of the rest of your
 list unless you are also wanting to exclude all 'tmp' directories, etc.

 $Conf{BackupFilesExclude} = {
 '/' => [
 '/dev',
 '/proc',
 '/sys',
 '/tmp',
 ...
 'HDD2'
 ]
 };

 --
 Bowie


 






Re: [BackupPC-users] Thinking aloud about backup rotation

2010-01-26 Thread Kameleon
If I catch what you are thinking of, BackupPC already does that. With rsync
it checks for updated files and only backs up those that have changed since
the last backup. Unless I am missing what you are trying to accomplish?



On Tue, Jan 26, 2010 at 7:14 AM, PD Support
support-li...@petdoctors.co.ukwrote:

 If I have a source/dest folder with a week's worth of backups in it
 labelled
 Mon_backup.bak, Tue_backup.bak etc., the next backup to be made on (say)
 Weds will technically be the diff between the Tues copy on the BackupPC
 server and the Weds copy on the remote server.

 It would be great if BackupPC had some way of 'knowing' that a folder
 contained files of a cyclic nature like this and could do the relevant
 sync/diff backup by looking at yesterday's file??

 Just wondered!?








Re: [BackupPC-users] Slow link options

2009-12-18 Thread Kameleon
I was able to get a full backup on 3 of the 4 servers. For two of them I was
able to take the backuppc machine to the location, and the third was small
enough to complete over the T1. However, we have one remaining remote
location that has approximately 309GB of data that needs backing up. This
initial full will take anywhere in the range of 10-20 DAYS according to my
numbers.

However, we do have an rsync of that server on another in-house server. It
is a complete rsync minus a few directories like /proc, /var, etc. So that
we don't have to take the backuppc machine 3 hours away: does anyone know if
it would be possible to somehow set up backuppc to use the complete existing
in-house rsync as the base for the initial full backup?

The rsync backup is set up so that the entire backup is stored in
/server/servername. Since backuppc stores the files in relation to the root
directory, how would I move the files from
/var/lib/backuppc/pc/rsync-server/0/f%2fserver%2fremoteserver to
/var/lib/backuppc/pc/remoteserver/0/f%2f? Should it be as simple as moving
the folders? Or is this even possible?



On Wed, Dec 16, 2009 at 3:19 PM, Chris Robertson crobert...@gci.net wrote:

 Kameleon wrote:
  I have a few remote sites I am wanting to backup using backuppc.
  However, two are on slow DSL connections and the other 2 are on T1's.
  I did some math and roughly figured that the DSL connections, having a
  256k upload, could do approximately 108MB/hour of transfer. With these
  clients having around 65GB each that would take FOREVER!!!
 
  I am able to take the backuppc server to 2 of the remote locations
  (the DSL ones) and put it on the LAN with the server to be backed up
  to get the initial full backup. What I am wondering is this: What do
  others do with slow links like this? I need a full backup at least
  weekly and incrimentals nightly. Is there an easy way around this?

 The feasibility of this depends entirely on the rate of change of the
 backup data.  Once you get the initial full, rsync backups only transfer
 changes.  Have a look at the documentation
 (http://backuppc.sourceforge.net/faq/BackupPC.html#backup_basics) for
 more details.

 
  Thanks in advance.

 Chris






Re: [BackupPC-users] Combine multiple Backuppc servers into one

2009-12-18 Thread Kameleon
To simplify what I am trying to accomplish, I will explain it this way:

We currently have 2 backuppc servers. Both have 2x 1TB drives in a RAID1
array. What I want to do is move all the drives into one machine and set it
up as a RAID5. That would give us 3TB usable rather than 2TB usable, which
is why I need to move everything to one setup.

Thanks for any guidance.

On Thu, Dec 17, 2009 at 7:35 PM, Kameleon kameleo...@gmail.com wrote:

 Thanks for that idea but that is not an option. I need to combine both
 backuppc machines into one physical backuppc machine. Both servers have 2
 1TB drives in a raid1 if that matters.



 On Thu, Dec 17, 2009 at 6:52 PM, Shawn Perry redmo...@comcast.net wrote:

 You can use a virtual machine for each (I am using openvz via Proxmox
 with my backuppc, and it works perfectly).

 On Thu, Dec 17, 2009 at 2:35 PM, Kameleon kameleo...@gmail.com wrote:
  I have multiple backuppc servers that I would like to combine into one
  physical machine. Each of them have different clients they were backing
 up.
  But in an effort to save power and heat, we are trying to consolidate
  machines. Is there an easy way to combine multiple backuppc machines
 into
  one existing one?
 
 
 
 







Re: [BackupPC-users] Combine multiple Backuppc servers into one

2009-12-18 Thread Kameleon
I was afraid of that. Thanks for the reply. It may make better sense to have
multiple servers for a bit anyway. Hopefully we will soon be getting a
dedicated Dell server for this, so I can set it up to do backups and leave
the current ones in archive mode until such time as the data they hold is
outdated.

Thank you very much.



On Fri, Dec 18, 2009 at 3:27 PM, Les Mikesell lesmikes...@gmail.com wrote:

 Kameleon wrote:
  To simplify what I am trying to accomplish I will explain it this way:
 
  We currently have 2 backuppc servers. Both have 2x 1TB drives in a Raid1
  array. What I want to do is move all the drives into one machine and set
  it up as a Raid5. That would give us 3TB usable rather than 2TB usable.
  Hence why I need to move everything to one setup.
 
  Thanks for any guidance.

 There's no good way to merge existing pooled files if that is what you
 are asking.  Or to convert a Raid1 to a Raid5 without losing the
 contents. I'd recommend building a new setup the way you want and
 holding on to the old systems for as long as you might have a need to
 restore from their older history, or perhaps generating tar images that
 you can store elsewhere with BackupPC_tarCreate.  Once the new system
 has collected the history you need you can re-use the old drives.

 --
   Les Mikesell
lesmikes...@gmail.com
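
For the BackupPC_tarCreate route mentioned above, something like this (a
hedged sketch: the host name and output path are made up; -n -1 selects the
most recent backup and -s names the share):

BackupPC_tarCreate -h oldclient -n -1 -s / / > /archive/oldclient-latest.tar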





[BackupPC-users] Combine multiple Backuppc servers into one

2009-12-17 Thread Kameleon
I have multiple backuppc servers that I would like to combine into one
physical machine. Each of them has different clients it was backing up.
But in an effort to save power and heat, we are trying to consolidate
machines. Is there an easy way to combine multiple backuppc machines into
one existing one?


Re: [BackupPC-users] Combine multiple Backuppc servers into one

2009-12-17 Thread Kameleon
Thanks for that idea but that is not an option. I need to combine both
backuppc machines into one physical backuppc machine. Both servers have 2
1TB drives in a raid1 if that matters.



On Thu, Dec 17, 2009 at 6:52 PM, Shawn Perry redmo...@comcast.net wrote:

 You can use a virtual machine for each (I am using openvz via Proxmox
 with my backuppc, and it works perfectly).

 On Thu, Dec 17, 2009 at 2:35 PM, Kameleon kameleo...@gmail.com wrote:
  I have multiple backuppc servers that I would like to combine into one
  physical machine. Each of them have different clients they were backing
 up.
  But in an effort to save power and heat, we are trying to consolidate
  machines. Is there an easy way to combine multiple backuppc machines into
  one existing one?
 
 
 
 





[BackupPC-users] Slow link options

2009-12-16 Thread Kameleon
I have a few remote sites I am wanting to back up using backuppc. However,
two are on slow DSL connections and the other 2 are on T1s. I did some math
and roughly figured that the DSL connections, having a 256k upload, could do
approximately 108MB/hour of transfer. With these clients having around 65GB
each, that would take FOREVER!!!
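
(For scale: 65GB at roughly 108MB/hour works out to around 600 hours, or
some 25 days, per client.)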

I am able to take the backuppc server to 2 of the remote locations (the DSL
ones) and put it on the LAN with the server to be backed up, to get the
initial full backup. What I am wondering is this: what do others do with
slow links like this? I need a full backup at least weekly and incrementals
nightly. Is there an easy way around this?

Thanks in advance.


Re: [BackupPC-users] rsyncd on vista 64 bit

2009-12-11 Thread Kameleon
Just for future reference:

I was just now able to get this working. I simply renamed the original
cygrunsrv.exe and cygwin1.dll with a -org suffix to differentiate them from
the new versions. I then downloaded the cygrunsrv files out of the cygwin
package and replaced those two files in the c:\rsyncd folder. Started the
service and WHAM!!! It works. Thanks to all who assisted me with this issue.
While not a backuppc issue directly, it should be helpful to those of us who
want to use the rsyncd setup on Windows. I have the 2 files I replaced if
anyone else runs into this issue.
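
The steps, roughly (a hedged sketch: the service name rsyncd and the
c:\downloads location are assumptions; substitute wherever you put the
files pulled from the cygwin package):

cd c:\rsyncd
ren cygrunsrv.exe cygrunsrv-org.exe
ren cygwin1.dll cygwin1-org.dll
copy c:\downloads\cygrunsrv.exe .
copy c:\downloads\cygwin1.dll .
net start rsyncd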



On Wed, Dec 9, 2009 at 2:02 AM, Erik Hjertén erik.hjer...@companion.sewrote:

 Kameleon wrote:
  I am trying to setup the standalone rsyncd from the backuppc downloads
  page on a 64 bit vista machine. I have done it already on about 5 32
  bit machines. Only this one fails to start the service. I see no error
  other than it trys to run and then nothing. Has anyone else ran into
  this issue and found a workaround? I don't want to use smb if I can
  help it. Thanks in advance.
 I'm running cygwin bundled in Deltacopy on Vista 64. I'm not sure if the
 Deltacopy team altered the cygwin-dlls in some way, but it works very
 well. I'm doing daily backups to a Linux based server running Backuppc
 via rsyncd. Deltacopy can be found here:
 http://www.aboutmyip.com/AboutMyXApp/DeltaCopy.jsp

 Kind regards
 /Erik




 --
 Return on Information:
 Google Enterprise Search pays you back
 Get the facts.
 http://p.sf.net/sfu/google-dev2dev
 ___
 BackupPC-users mailing list
 BackupPC-users@lists.sourceforge.net
 List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
 Wiki:http://backuppc.wiki.sourceforge.net
 Project: http://backuppc.sourceforge.net/



[BackupPC-users] rsyncd on vista 64 bit

2009-12-08 Thread Kameleon
I am trying to set up the standalone rsyncd from the backuppc downloads page
on a 64-bit Vista machine. I have done it already on about 5 32-bit machines.
Only this one fails to start the service. I see no error; it just tries to
run and then nothing. Has anyone else run into this issue and found a
workaround? I don't want to use smb if I can help it. Thanks in advance.


Re: [BackupPC-users] rsyncd on vista 64 bit

2009-12-08 Thread Kameleon
Also, I should note: I did the trick of running as administrator, and no go.
I get the error:

cygrunsrv: Error starting a service: QueryServiceStatus:  Win32 error 1053:
The service did not respond to the start or control request in a timely
fashion.

If that helps any. I am making sure it is not a gremlin by deleting the
service and rebooting. Any other ideas? This works perfectly on 32-bit.



On Tue, Dec 8, 2009 at 2:50 PM, Alan McKay alan.mc...@gmail.com wrote:

 I definitely have had this issue on my machine at home and never did
 resolve it.

 Fortunately I have Ubuntu Linux running on it now though, and no more
 problems :)

 But I am likely to hit this before too long here at work so will watch
 this thread eagerly


 --
 “Don't eat anything you've ever seen advertised on TV”
 - Michael Pollan, author of In Defense of Food





Re: [BackupPC-users] Got fatal error during xfer

2009-12-03 Thread Kameleon
Update:

We moved the backuppc server to the same room as the client server and onto
the same switch. The backup still failed, but got a lot further. The error is
the same as last time:

Read EOF:
Tried again: got 0 bytes
Child is aborting
Can't write 33792 bytes to socket
Sending csums, cnt = 250243, phase = 1
Done: 26416 files, 31698685174 bytes
Got fatal error during xfer (aborted by signal=PIPE)
Backup aborted by user signal
Saving this as a partial backup


So I took the exact command it uses, according to the log file, and ran it
manually as the backuppc user. Then I straced the PID on the client machine
and got this:

select(1, [0], [], NULL, {45, 57000})   = 0 (Timeout)
select(1, [0], [], NULL, {60, 0})   = 0 (Timeout)
select(1, [0], [], NULL, {60, 0})   = 0 (Timeout)
select(1, [0], [], NULL, {60, 0})   = 0 (Timeout)
select(1, [0], [], NULL, {60, 0})   = 0 (Timeout)
select(1, [0], [], NULL, {60, 0})   = 0 (Timeout)
select(1, [0], [], NULL, {60, 0})   = 0 (Timeout)
select(1, [0], [], NULL, {60, 0})   = 0 (Timeout)
select(1, [0], [], NULL, {60, 0})   = 0 (Timeout)
select(1, [0], [], NULL, {60, 0})   = 1 (in [0], left {29, 17})
read(0, , 4)  = 0
write(2, rsync: connection unexpectedly c..., 72) = -1 EPIPE (Broken pipe)
--- SIGPIPE (Broken pipe) @ 0 (0) ---
rt_sigaction(SIGUSR1, {0x1, [], 0}, NULL, 8) = 0
rt_sigaction(SIGUSR2, {0x1, [], 0}, NULL, 8) = 0
write(2, rsync error: errors with program..., 83) = -1 EPIPE (Broken pipe)
--- SIGPIPE (Broken pipe) @ 0 (0) ---
rt_sigaction(SIGUSR1, {0x1, [], 0}, NULL, 8) = 0
rt_sigaction(SIGUSR2, {0x1, [], 0}, NULL, 8) = 0
gettimeofday({1259860374, 861962}, NULL) = 0
select(0, NULL, NULL, NULL, {0, 10}) = 0 (Timeout)
gettimeofday({1259860374, 961981}, NULL) = 0
exit_group(13)  = ?
Process 13532 detached

It still appears the problem is on the remote server, since that end is the one
exiting. The client server is a Dell PowerEdge, so I would hope it isn't hardware
related. Anything else I can check before I give it a swift kick in the
pants?

Donny B.
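
To reproduce this kind of test, the steps amount to something like the
following (the BackupPC_dump path is the Debian/Ubuntu package location, and
clienthost is a placeholder):

  # on the BackupPC server: run the full dump by hand as the backuppc user
  sudo -u backuppc /usr/share/backuppc/bin/BackupPC_dump -v -f clienthost

  # on the client: attach strace to the newest rsync process, following
  # forks and timestamping each syscall
  strace -f -tt -p $(pgrep -n rsync) -o /tmp/rsync-trace.log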

On Wed, Dec 2, 2009 at 3:56 PM, Kameleon kameleo...@gmail.com wrote:

 I did some more testing, watching top and such on both the backuppc server and
 the client. Both had plenty of memory. Our current suspicion is
 the cable or switch between the two. Tomorrow we plan to relocate
 the backuppc server from its current location and plug it directly into the
 server via a crossover cable. That will eliminate all the networking gear
 except the network interface cards.

 Thanks for the input.



 On Wed, Dec 2, 2009 at 3:10 PM, Les Mikesell lesmikes...@gmail.comwrote:

 Kameleon wrote:
  I do apologize. The backuppc server is Ubuntu 9.10 and the server being
  backed up is Centos 5.4. I have changed everything back to rsync and
  tried a manual full backup (since that is what it was attempting to do
   when it failed). I ran strace on the PID of rsync on the remote server
  being backed up. The last few lines of the output are below.
 [...]
   read(0,
 
 \256\374O\362\350\224\30\3101(Y\8\3279z\300nt\10*\367\26+\355\364\245W)/\224\301...,
  8184) = 8184
  select(1, [0], [], NULL, {60, 0})   = 1 (in [0], left {60, 0})
  read(0,
 
 \314\326\4\242P\345\3\332\245b\317\363\4\253'\307\3056Y\307X\313\364I\5\3746\fH\340\212w...,
  8184) = 1056
  select(1, [0], [], NULL, {60, 0})   = 1 (in [0], left {58, 296000})
  read(0, , 8184)   = 0
  select(2, NULL, [1], [1], {60, 0})  = 1 (out [1], left {60, 0})
  write(1, O\0\0\10rsync: connection unexpected..., 83) = -1 EPIPE
  (Broken pipe)

 Looks like something is wrong on the target side, dropping the
 connection.  File system problems?  Out of memory?  Are both machines on
 the same LAN or could there be a problem with networking equipment
 between them?


  Backuppc shows the following error when it fails:
 
  2009-12-02 14:17:58 full backup started for directory /; updating
 partial #4
  2009-12-02 14:24:59 Aborting backup up after signal PIPE
  2009-12-02 14:25:00 Got fatal error during xfer (aborted by signal=PIPE)

 This doesn't tell you anything except that the other end died.


  remote machine: rsync  version 3.0.6  protocol version 30
  backuppc: rsync  version 3.0.6  protocol version 30

 Backuppc doesn't use the rsync binary on the server side - it has its
 own implementation in Perl.  But it looks like things started OK and
 then either the remote side quit or the network connection had a problem.


 --
Les Mikesell
lesmikes...@gmail.com



Re: [BackupPC-users] Got fatal error during xfer

2009-12-03 Thread Kameleon
Not a client timeout as the setting is set at 72000 and it will die within
30 minutes if it is going to die. We are going to update the client server
and run a file system check to see what is going on there. Hopefully we can
find something.
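
For reference, the knob in question lives in the main or per-host config; a
minimal sketch in BackupPC 3.x syntax:

  # seconds of xfer inactivity before BackupPC aborts; 72000 is the default
  $Conf{ClientTimeout} = 72000;

Raising it only helps if the abort really is a timeout, which a failure
inside 30 minutes argues against.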



On Thu, Dec 3, 2009 at 2:13 PM, Tyler J. Wagner ty...@tolaris.com wrote:

 Are you sure this isn't a ClientTimeout problem?  Try increasing it and see if
 the backup runs for longer.

 Tyler

 On Thursday 03 Dec 2009 17:25:31 Kameleon wrote:
  Update:
 
  We moved the backuppc server into the same room as the client server, onto
  the same switch. The backup still failed, but got a lot further. The error
  is the same as last time:
 
  Read EOF:
  Tried again: got 0 bytes
  Child is aborting
  Can't write 33792 bytes to socket
  Sending csums, cnt = 250243, phase = 1
  Done: 26416 files, 31698685174 bytes
  Got fatal error during xfer (aborted by signal=PIPE)
  Backup aborted by user signal
  Saving this as a partial backup
 
 
  So, as the backuppc user, I manually ran the exact command that BackupPC uses
  according to the log file. Then I straced the PID on the client
  machine and got this:
 
  select(1, [0], [], NULL, {45, 57000})   = 0 (Timeout)
  select(1, [0], [], NULL, {60, 0})   = 0 (Timeout)
  select(1, [0], [], NULL, {60, 0})   = 0 (Timeout)
  select(1, [0], [], NULL, {60, 0})   = 0 (Timeout)
  select(1, [0], [], NULL, {60, 0})   = 0 (Timeout)
  select(1, [0], [], NULL, {60, 0})   = 0 (Timeout)
  select(1, [0], [], NULL, {60, 0})   = 0 (Timeout)
  select(1, [0], [], NULL, {60, 0})   = 0 (Timeout)
  select(1, [0], [], NULL, {60, 0})   = 0 (Timeout)
  select(1, [0], [], NULL, {60, 0})   = 1 (in [0], left {29, 17})
  read(0, , 4)  = 0
  write(2, rsync: connection unexpectedly c..., 72) = -1 EPIPE (Broken
   pipe) --- SIGPIPE (Broken pipe) @ 0 (0) ---
  rt_sigaction(SIGUSR1, {0x1, [], 0}, NULL, 8) = 0
  rt_sigaction(SIGUSR2, {0x1, [], 0}, NULL, 8) = 0
  write(2, rsync error: errors with program..., 83) = -1 EPIPE (Broken
   pipe) --- SIGPIPE (Broken pipe) @ 0 (0) ---
  rt_sigaction(SIGUSR1, {0x1, [], 0}, NULL, 8) = 0
  rt_sigaction(SIGUSR2, {0x1, [], 0}, NULL, 8) = 0
  gettimeofday({1259860374, 861962}, NULL) = 0
  select(0, NULL, NULL, NULL, {0, 10}) = 0 (Timeout)
  gettimeofday({1259860374, 961981}, NULL) = 0
  exit_group(13)  = ?
  Process 13532 detached
 
  It still appears the problem is on the remote server, since that end is the one
  exiting. The client server is a Dell PowerEdge, so I would hope it isn't
  hardware related. Anything else I can check before I give it a swift kick in
  the pants?
 
  Donny B.
 
  On Wed, Dec 2, 2009 at 3:56 PM, Kameleon kameleo...@gmail.com wrote:
   I did some more testing, watching top and such on both the backuppc server
   and the client. Both had plenty of memory. Our current suspicion is the
   cable or switch between the two. Tomorrow we plan to relocate the backuppc
   server from its current location and plug it directly into the server via a
   crossover cable. That will eliminate all the networking gear except the
   network interface cards.
  
   Thanks for the input.
  
   On Wed, Dec 2, 2009 at 3:10 PM, Les Mikesell lesmikes...@gmail.com
 wrote:
   Kameleon wrote:
I do apologize. The backuppc server is Ubuntu 9.10 and the server
being backed up is Centos 5.4. I have changed everything back to
 rsync
and tried a manual full backup (since that is what it was attempting
 to do when it failed). I ran strace on the PID of rsync on the remote
server being backed up. The last few lines of the output are below.
  
   [...]
  
 read(0,
  
  
 \256\374O\362\350\224\30\3101(Y\8\3279z\300nt\10*\367\26+\355\364\245W
  )/\224\301...,
  
8184) = 8184
select(1, [0], [], NULL, {60, 0})   = 1 (in [0], left {60, 0})
read(0,
  
  
 \314\326\4\242P\345\3\332\245b\317\363\4\253'\307\3056Y\307X\313\364I\5
  \3746\fH\340\212w...,
  
8184) = 1056
select(1, [0], [], NULL, {60, 0})   = 1 (in [0], left {58,
296000}) read(0, , 8184)   = 0
select(2, NULL, [1], [1], {60, 0})  = 1 (out [1], left {60, 0})
write(1, O\0\0\10rsync: connection unexpected..., 83) = -1 EPIPE
(Broken pipe)
  
   Looks like something is wrong on the target side, dropping the
   connection.  File system problems?  Out of memory?  Are both machines
 on
   the same LAN or could there be a problem with networking equipment
   between them?
  
Backuppc shows the following error when it fails:
   
2009-12-02 14:17:58 full backup started for directory /; updating
  
   partial #4
  
2009-12-02 14:24:59 Aborting backup up after signal PIPE
2009-12-02 14:25:00 Got fatal error during xfer (aborted by
signal=PIPE)
  
   This doesn't tell you anything except that the other end died.
  
remote machine: rsync  version 3.0.6  protocol version 30
backuppc: rsync

[BackupPC-users] Got fatal error during xfer

2009-12-02 Thread Kameleon
I have an issue with one of my backuppc servers. This server is used only to
back up one server and had been doing so flawlessly since it was installed about a
week ago. Until last night, that is.

I checked it this morning to find the above error in the host summary. I did
some checking and even manually ran the backup with an strace on the process
on the remote server. Every time it would fail. I had a similar issue on my
home backuppc server on a few servers and switching them from rsync to
rsyncd and changing the needed settings did the trick. Not so much with this
one. Here are the last few lines of the log when doing both rsync and
rsyncd:

via rsync:

2009-12-02 10:18:04 full backup started for directory /; updating partial #4
2009-12-02 10:25:09 Aborting backup up after signal PIPE
2009-12-02 10:25:10 Got fatal error during xfer (aborted by signal=PIPE)
2009-12-02 10:25:10 Saved partial dump 4

Via rsyncd:

2009-12-02 11:09:00 full backup started for directory main; updating partial
#4
2009-12-02 11:15:01 Aborting backup up after signal PIPE
2009-12-02 11:15:02 Got fatal error during xfer (aborted by signal=PIPE)

I am at a loss. Something is causing the remote end to fail. I did a little
research and found that, when backing up via rsync over ssh, a large
file (as in over 2GB) can be the cause. I looked in the directories being
backed up and there are a few files larger than 2GB that are being backed
up. Is there a workaround for this?

Donny B.
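
As a quick check of the large-file theory, something like this (GNU find
assumed) lists everything over 2GB in the tree being backed up:

  # stay on one filesystem and print anything larger than 2 GiB
  find / -xdev -type f -size +2G -exec ls -lh {} \;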


Re: [BackupPC-users] Got fatal error during xfer

2009-12-02 Thread Kameleon
I do apologize. The backuppc server is Ubuntu 9.10 and the server being
backed up is Centos 5.4. I have changed everything back to rsync and tried a
manual full backup (since that is what it was attempting to do when it
failed). I ran strace on the PID of rsync on the remote server being backed
up. The last few lines of the output are below.

read(0,
\276\202\271p\351\235\273\357g\36\274~[\4Y\265z\276,\321\344`\246\266\341\276\3158\264RA\366...,
8184) = 8184
select(1, [0], [], NULL, {60, 0})   = 1 (in [0], left {60, 0})
read(0,
4\274\370\2620\20\6\312\244M\22\353\r\304{\211\32h\276\310\\\256\303\217\4\217X\274\237\6z\311...,
8184) = 8184
select(1, [0], [], NULL, {60, 0})   = 1 (in [0], left {60, 0})
read(0,
\303\255\0374n\360D\355dvvi\254\264\333\224]lX\n*\336Y*\205\3510\374`t\210\303...,
8184) = 8184
select(1, [0], [], NULL, {60, 0})   = 1 (in [0], left {60, 0})
read(0,
\256\374O\362\350\224\30\3101(Y\8\3279z\300nt\10*\367\26+\355\364\245W)/\224\301...,
8184) = 8184
select(1, [0], [], NULL, {60, 0})   = 1 (in [0], left {60, 0})
read(0,
\314\326\4\242P\345\3\332\245b\317\363\4\253'\307\3056Y\307X\313\364I\5\3746\fH\340\212w...,
8184) = 1056
select(1, [0], [], NULL, {60, 0})   = 1 (in [0], left {58, 296000})
read(0, , 8184)   = 0
select(2, NULL, [1], [1], {60, 0})  = 1 (out [1], left {60, 0})
write(1, O\0\0\10rsync: connection unexpected..., 83) = -1 EPIPE (Broken
pipe)
--- SIGPIPE (Broken pipe) @ 0 (0) ---
write(2, rsync: writefd_unbuffered failed..., 87) = -1 EPIPE (Broken pipe)
--- SIGPIPE (Broken pipe) @ 0 (0) ---
rt_sigaction(SIGUSR1, {0x1, [], 0}, NULL, 8) = 0
rt_sigaction(SIGUSR2, {0x1, [], 0}, NULL, 8) = 0
write(2, rsync error: errors with program..., 83) = -1 EPIPE (Broken pipe)
--- SIGPIPE (Broken pipe) @ 0 (0) ---
rt_sigaction(SIGUSR1, {0x1, [], 0}, NULL, 8) = 0
rt_sigaction(SIGUSR2, {0x1, [], 0}, NULL, 8) = 0
gettimeofday({1259785360, 391955}, NULL) = 0
select(0, NULL, NULL, NULL, {0, 10}) = 0 (Timeout)
gettimeofday({1259785360, 492092}, NULL) = 0
exit_group(13)  = ?
Process 28645 detached


Backuppc shows the following error when it fails:

2009-12-02 14:17:58 full backup started for directory /; updating partial #4
2009-12-02 14:24:59 Aborting backup up after signal PIPE
2009-12-02 14:25:00 Got fatal error during xfer (aborted by signal=PIPE)


remote machine: rsync  version 3.0.6  protocol version 30
backuppc: rsync  version 3.0.6  protocol version 30

Both are using the ext3 filesystem.

Donny B.
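
One hedge against the ssh session itself being dropped mid-transfer is to add
keepalives to the rsync-over-ssh command. A sketch based on the BackupPC 3.x
default RsyncClientCmd (the 60-second interval is an arbitrary choice):

  # send an ssh keepalive every 60s so stateful firewalls/NAT between the
  # hosts don't expire an apparently idle connection
  $Conf{RsyncClientCmd} =
      '$sshPath -q -x -o ServerAliveInterval=60 -l root $host $rsyncPath $argList+';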

On Wed, Dec 2, 2009 at 2:18 PM, Les Mikesell lesmikes...@gmail.com wrote:

 Kameleon wrote:
  I have an issue with one of my backuppc servers. This server is used only
  to back up one server and had been doing so flawlessly since it was
  installed about a week ago. Until last night, that is.
 
  I checked it this morning to find the above error in the host summary. I
  did some checking and even manually ran the backup with an strace on the
  process on the remote server. Every time it would fail. I had a similar
  issue on my home backuppc server on a few servers and switching them
  from rsync to rsyncd and changing the needed settings did the trick. Not
  so much with this one. Here are the last few lines of the log when doing
  both rsync and rsyncd:
 
  via rsync:
 
  2009-12-02 10:18:04 full backup started for directory /; updating partial
 #4
  2009-12-02 10:25:09 Aborting backup up after signal PIPE
  2009-12-02 10:25:10 Got fatal error during xfer (aborted by signal=PIPE)
  2009-12-02 10:25:10 Saved partial dump 4
 
  Via rsyncd:
 
  2009-12-02 11:09:00 full backup started for directory main; updating
  partial #4
  2009-12-02 11:15:01 Aborting backup up after signal PIPE
  2009-12-02 11:15:02 Got fatal error during xfer (aborted by signal=PIPE)
 
  I am at a loss. Something is causing the remote end to fail. I did a
  little research and found that, when backing up via rsync over ssh, a
  large file (as in over 2GB) can be the cause. I looked in the
  directories being backed up and there are a few files larger than 2GB
  that are being backed up. Is there a workaround for this?

 It might help if you mentioned the OS involved.  Cygwin/windows should
 be the only thing with low size limits and I think that might be version
 specific.

 --
   Les Mikesell
 lesmikes...@gmail.com



Re: [BackupPC-users] Got fatal error during xfer

2009-12-02 Thread Kameleon
I did some more testing, watching top and such on both the backuppc server and
the client. Both had plenty of memory. Our current suspicion is
the cable or switch between the two. Tomorrow we plan to relocate
the backuppc server from its current location and plug it directly into the
server via a crossover cable. That will eliminate all the networking gear
except the network interface cards.

Thanks for the input.
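
Before re-cabling, the interface error counters and a raw throughput test can
narrow it down. A sketch, assuming the interface is eth0 and iperf is
installed on both ends:

  # look for CRC/frame errors or drops on both machines
  ip -s link show eth0
  ethtool -S eth0 | grep -iE 'err|drop'

  # raw TCP throughput, independent of rsync and ssh
  iperf -s                  # on the client being backed up
  iperf -c client.example   # on the backuppc server

If rsync dies at roughly the same byte count while iperf runs clean, the
network gear is probably not the culprit.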



On Wed, Dec 2, 2009 at 3:10 PM, Les Mikesell lesmikes...@gmail.com wrote:

 Kameleon wrote:
  I do apologize. The backuppc server is Ubuntu 9.10 and the server being
  backed up is Centos 5.4. I have changed everything back to rsync and
  tried a manual full backup (since that is what it was attempting to do
  when it failed). I ran strace on the PID of rsync on the remote server
  being backed up. The last few lines of the output are below.
 [...]
   read(0,
 
 \256\374O\362\350\224\30\3101(Y\8\3279z\300nt\10*\367\26+\355\364\245W)/\224\301...,
  8184) = 8184
  select(1, [0], [], NULL, {60, 0})   = 1 (in [0], left {60, 0})
  read(0,
 
 \314\326\4\242P\345\3\332\245b\317\363\4\253'\307\3056Y\307X\313\364I\5\3746\fH\340\212w...,
  8184) = 1056
  select(1, [0], [], NULL, {60, 0})   = 1 (in [0], left {58, 296000})
  read(0, , 8184)   = 0
  select(2, NULL, [1], [1], {60, 0})  = 1 (out [1], left {60, 0})
  write(1, O\0\0\10rsync: connection unexpected..., 83) = -1 EPIPE
  (Broken pipe)

 Looks like something is wrong on the target side, dropping the
 connection.  File system problems?  Out of memory?  Are both machines on
 the same LAN or could there be a problem with networking equipment
 between them?


  Backuppc shows the following error when it fails:
 
  2009-12-02 14:17:58 full backup started for directory /; updating partial
 #4
  2009-12-02 14:24:59 Aborting backup up after signal PIPE
  2009-12-02 14:25:00 Got fatal error during xfer (aborted by signal=PIPE)

 This doesn't tell you anything except that the other end died.


  remote machine: rsync  version 3.0.6  protocol version 30
  backuppc: rsync  version 3.0.6  protocol version 30

 Backuppc doesn't use the rsync binary on the server side - it has its
 own implementation in Perl.  But it looks like things started OK and
 then either the remote side quit or the network connection had a problem.


 --
Les Mikesell
lesmikes...@gmail.com





Re: [BackupPC-users] Problem with backuppc

2009-11-18 Thread Kameleon
I am sure there are others that will chime in on this but as I see it you
have a few options.

1. Set up LVM and use the external disk as a permanent addition to the system
2. Mount the external disk as the directory that will house your desktops'
backups

Honestly, I would be wary about using an external USB disk. A lot of them
have power-saving features that power the disk down after a short idle
period, which could cause issues with your backups. I would invest in another
internal drive, or even mount a drive in a separate machine via NFS or iSCSI.
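
For option 2, the usual approach with a Debian package install is to stop
BackupPC, copy the existing pool while preserving its hardlinks, and mount the
new disk over the top-level directory. A sketch, with the device name as a
placeholder and /var/lib/backuppc as the Debian default TopDir:

  /etc/init.d/backuppc stop
  # -H preserves the pool's hardlinks, without which the copy balloons
  rsync -aH /var/lib/backuppc/ /mnt/newdisk/
  mount /dev/sdb1 /var/lib/backuppc    # add an fstab entry to make it stick
  /etc/init.d/backuppc start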



On Wed, Nov 18, 2009 at 6:05 AM, KOUAO aketchi aketc...@yahoo.fr wrote:

 Hello,

 We have a server running Debian sarge with BackupPC. Everything is working,
 but our storage utilization now exceeds 95%. We want to back up the PC
 workstations to an external USB disk with BackupPC. The problem is how to
 configure the PCs' config file so that all of the backup data goes onto this
 external disk. The disks in the server would then be used only for the
 backups of the servers, in order to reduce that percentage.
 Thanks a lot for your help








Re: [BackupPC-users] Backuppc with Xen

2009-11-16 Thread Kameleon
I think I found my issue. It is not a Xen problem, nor (I think, at least) a
backuppc issue. I noticed that I was using rsyncd, as opposed to rsync over
ssh, on the clients that were being backed up successfully. So I set up
rsyncd on the 2 clients that were having issues. The first one did a proper
incremental in 3 min 11 sec, but it didn't have much data to transfer. The
other took 42 min 5 sec, but had substantially more to transfer.

The point is, though, that it works with rsyncd instead of regular rsync. Any
ideas why? Maybe there is a difference in the way Xen paravirt domUs are set
up, since the one physical Ubuntu machine is still backing up fine over plain rsync.
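
For comparison, the rsyncd setup on each client amounts to something like
this (module name, user, and paths are just examples):

  # /etc/rsyncd.conf on the client
  [main]
      path = /
      uid = root
      read only = yes
      auth users = backuppc
      secrets file = /etc/rsyncd.secrets

  # matching BackupPC per-host config
  $Conf{XferMethod}     = 'rsyncd';
  $Conf{RsyncShareName} = 'main';
  $Conf{RsyncdUserName} = 'backuppc';
  $Conf{RsyncdPasswd}   = 'secret';

One practical difference is that rsyncd skips the ssh layer entirely, which
fits the pattern of the ssh transport being the slow or fragile piece here.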



On Sun, Nov 15, 2009 at 8:09 PM, Kameleon kameleo...@gmail.com wrote:

 I upgraded to ubuntu 8.10 on the backuppc domu and the same issue
 remains. It went from backuppc ver 3.0.0 to 3.1.0. I will test a
 64-bit ubuntu domu backup here soon and report the results. I find it
 odd though that it backs up physical machines and also hvm domu
 machines fine. I don't think it is a xen issue because it is backing
 up the hvm domu and it can't know the difference as it appears as a
 physical machine.

 Thanks for the input.

 On 11/15/09, Gerald Brandt g...@majentis.com wrote:
  I have backuppc running on 32 bit Ubuntu 8.10 under Citrix XenServer, and
 I
  don't see these issues. All my DomU systems are running off of the same
  ISCSI array on a 1 GB ethernet.
 
  With backuppc packaged as a deb, it's easy enough to check whether you have a
  32- vs 64-bit issue. I'd also recommend you go to at least 8.10, since it has
  a newer version of backuppc.
 
  Gerald
 
  - Kameleon kameleo...@gmail.com wrote:
  Hello,
 
  I have installed backuppc on an Ubuntu 8.04 Xen 64-bit domU and I must say
  I like it. It was installed via the aptitude install backuppc
  command. I have set up a few other machines for it to back up and am liking
  what I see. I have a slight problem, however. When I back up a physical
  machine or a Windows virtual machine, it does fine. But when I back up
  another Xen para-virtualized domU, it takes FOREVER and finally errors out
  with backup failed (aborted by signal=ALRM). Here are my systems and
  basic specs:
 
  virtual server: XEN dom0 running with all domu's as LVM, 64-bit.
  backuppc: running on Ubuntu 8.04 64-bit domu with 2 cpu cores available
  (2.4GHZ) and 1GB RAM
  laptop: physical laptop running windows vista 32-bit. Using the standalone
  cygwin rsyncd as supplied on the backuppc downloads page.
  Windows 2003 server: Running as a HVM 32-bit domu on the virtual server.
  Using same cygwin rsyncd as above.
  machine1: Ubuntu 8.04 32-bit physical machine. using rsync to backup
  machine2: Ubuntu 8.04 32-bit XEN domu. Using rsync to backup
  machine3: Ubuntu 8.04 32-bit XEN domu. Using rsync to backup
 
  The only machines that do back up properly are listed below with the
  approximate time taken to run an incremental and amount of data:

  win2k3 server takes approx 45 min to run an incremental. (8G data, 1.7G
  incremental)
  Machine1 takes approx 25 min to run an incremental. (2.8G data total,
  0.9GB incremental)
  laptop takes approx 25 min to run an incremental. (about 63G data total,
  500-600M incremental)
 
  I have tried everything I can think of to get the 2 para-virt domUs
  backed up. I was eventually able to get a good FULL backup of them, but
  that took hours. The incremental on machine2 started about 11 hours ago
  and has still not completed. It will keep going until I stop it or it
  errors out as above. I canceled backups on machine3 due to them taking
  forever.
 
  I was thinking it may have something to do with the ClientCharSet
  variable, so I ran 'locale charmap' and came up with ANSI_X3.4-1968 on
  the 2 machines that are not backing up properly. None of the others have
  anything set for that variable, and I have tried both setting it to
  ANSI_X3.4-1968 and leaving it blank.
 
  I was also debating trying to figure out a way to take an LVM snapshot of
  the domUs on the dom0, mount it read-only, have backuppc run a
  backup against that, and then remove the LVM snapshot. But I would rather
  it just work the way it is designed to.
 
  Does anyone have an idea of what could be happening here? I would love to
  have proper backups of my data especially since one of the machines
  not backing up is my web server! If you need more information I will
  provide any I can. Any help is appreciated.
 
  Donny
 
 

[BackupPC-users] Backuppc with Xen

2009-11-15 Thread Kameleon
Hello,

 I have installed backuppc on an Ubuntu 8.04 Xen 64-bit domU and I must
say I like it. It was installed via the aptitude install backuppc
command. I have set up a few other machines for it to back up and am liking
what I see. I have a slight problem, however. When I back up a physical
machine or a Windows virtual machine, it does fine. But when I back up another
Xen para-virtualized domU, it takes FOREVER and finally errors out with
backup failed (aborted by signal=ALRM). Here are my systems and basic
specs:

virtual server: XEN dom0 running with all domu's as LVM, 64-bit.
backuppc: running on Ubuntu 8.04 64-bit domu with 2 cpu cores available
(2.4GHZ) and 1GB RAM
laptop: physical laptop running windows vista 32-bit. Using the standalone
cygwin rsyncd as supplied on the backuppc downloads page.
Windows 2003 server: Running as a HVM 32-bit domu on the virtual server.
Using same cygwin rsyncd as above.
machine1: Ubuntu 8.04 32-bit physical machine. using rsync to backup
machine2: Ubuntu 8.04 32-bit XEN domu. Using rsync to backup
machine3: Ubuntu 8.04 32-bit XEN domu. Using rsync to backup

The only machines that do back up properly are listed below with the
approximate time taken to run an incremental and amount of data:

win2k3 server takes approx 45 min to run an incremental. (8G data, 1.7G
incremental)
Machine1 takes approx 25 min to run an incremental. (2.8G data total, 0.9GB
incremental)
laptop takes approx 25 min to run an incremental. (about 63G data total,
500-600M incremental)

I have tried everything I can think of to get the 2 para-virt domUs backed
up. I was eventually able to get a good FULL backup of them, but that took
hours. The incremental on machine2 started about 11 hours ago and has still
not completed. It will keep going until I stop it or it errors out as above.
I canceled backups on machine3 due to them taking forever.

I was thinking it may have something to do with the ClientCharSet variable,
so I ran 'locale charmap' and came up with ANSI_X3.4-1968 on the 2
machines that are not backing up properly. None of the others have anything
set for that variable, and I have tried both setting it to ANSI_X3.4-1968 and
leaving it blank.
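
For the record, a sketch of that setting in BackupPC 3.x syntax, where the
option is spelled ClientCharset:

  # character set of filenames on the client; empty means no conversion
  $Conf{ClientCharset} = '';    # or e.g. 'utf8'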

I was also debating trying to figure out a way to take an LVM snapshot of the
domUs on the dom0, mount it read-only, have backuppc run a backup against
that, and then remove the LVM snapshot (sketched below). But I would rather it
just work the way it is designed to.
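
The snapshot idea sketched out, run on the dom0 (volume group, volume names
and sizes are placeholders):

  # snapshot the domU's disk while it runs; 2G of copy-on-write space
  lvcreate -s -L 2G -n machine2-snap /dev/vg0/machine2-disk
  mount -o ro /dev/vg0/machine2-snap /mnt/snap
  # ...have BackupPC back up /mnt/snap, e.g. as an ssh share on the dom0...
  umount /mnt/snap
  lvremove -f /dev/vg0/machine2-snap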

Does anyone have an idea of what could be happening here? I would love to
have proper backups of my data especially since one of the machines not
backing up is my web server! If you need more information I will provide any
I can. Any help is appreciated.

Donny


Re: [BackupPC-users] Backuppc with Xen

2009-11-15 Thread Kameleon
I upgraded to ubuntu 8.10 on the backuppc domu and the same issue
remains. It went from backuppc ver 3.0.0 to 3.1.0. I will test a
64-bit ubuntu domu backup here soon and report the results. I find it
odd though that it backs up physical machines and also hvm domu
machines fine. I don't think it is a xen issue because it is backing
up the hvm domu and it can't know the difference as it appears as a
physical machine.

Thanks for the input.

On 11/15/09, Gerald Brandt g...@majentis.com wrote:
 I have backuppc running on 32 bit Ubuntu 8.10 under Citrix XenServer, and I
 don't see these issues. All my DomU systems are running off of the same
 ISCSI array on a 1 GB ethernet.

 With backuppc packaged as a deb, it's easy enough to check whether you have a
 32- vs 64-bit issue. I'd also recommend you go to at least 8.10, since it has a
 newer version of backuppc.

 Gerald

 - Kameleon kameleo...@gmail.com wrote:
 Hello,

 I have installed backuppc on an Ubuntu 8.04 Xen 64-bit domU and I must say
 I like it. It was installed via the aptitude install backuppc
 command. I have set up a few other machines for it to back up and am liking
 what I see. I have a slight problem, however. When I back up a physical
 machine or a Windows virtual machine, it does fine. But when I back up
 another Xen para-virtualized domU, it takes FOREVER and finally errors out
 with backup failed (aborted by signal=ALRM). Here are my systems and
 basic specs:

 virtual server: XEN dom0 running with all domu's as LVM, 64-bit.
 backuppc: running on Ubuntu 8.04 64-bit domu with 2 cpu cores available
 (2.4GHZ) and 1GB RAM
 laptop: physical laptop running windows vista 32-bit. Using the standalone
 cygwin rsyncd as supplied on the backuppc downloads page.
 Windows 2003 server: Running as a HVM 32-bit domu on the virtual server.
 Using same cygwin rsyncd as above.
 machine1: Ubuntu 8.04 32-bit physical machine. using rsync to backup
 machine2: Ubuntu 8.04 32-bit XEN domu. Using rsync to backup
 machine3: Ubuntu 8.04 32-bit XEN domu. Using rsync to backup

 The only machines that do back up properly are listed below with the
 approximate time taken to run an incremental and amount of data:

 win2k3 server takes approx 45 min to run an incremental. (8G data, 1.7G
 incremental)
 Machine1 takes approx 25 min to run an incremental. (2.8G data total,
 0.9GB incremental)
 laptop takes approx 25 min to run an incremental. (about 63G data total,
 500-600M incremental)

 I have tried everything I can think of to get the 2 para-virt domUs
 backed up. I was eventually able to get a good FULL backup of them, but
 that took hours. The incremental on machine2 started about 11 hours ago
 and has still not completed. It will keep going until I stop it or it
 errors out as above. I canceled backups on machine3 due to them taking
 forever.

 I was thinking it may have something to do with the ClientCharSet
 variable so I ran the locale charmap and came up with ANSI_X3.4-1968 on
 the 2 machines that are not backing up properly. None of the others have
 anything set for that variable, and I have tried both setting it to
 ANSI_X3.4-1968 and leaving it blank.

 I was also debating trying to figure out a way to run an LVM snapshot of
 the domu's on the dom0, mount them to be readable, and have backuppc run a
 backup against that, then remove the LVM snapshot. But I would rather it
 just work the way it is designed to.

 Does anyone have an idea of what could be happening here? I would love to
 have proper backups of my data especially since one of the machines
 not backing up is my web server! If you need more information I will
 provide any I can. Any help is appreciated.

 Donny


