Re: [BackupPC-users] experiences with very large pools?

2010-02-19 Thread Chris Robertson
Ralf Gross wrote:
 Gerald Brandt wrote: 
   
 You may want to look at this thread 
 http://www.mail-archive.com/backuppc-users@lists.sourceforge.net/msg17234.html
  
 

 I've seen this thread, but the pool sizes there top out in the low-TB
 range.

 Ralf
   

Not all of them...

http://www.mail-archive.com/backuppc-users@lists.sourceforge.net/msg17240.html

Chris




Re: [BackupPC-users] experiences with very large pools?

2010-02-19 Thread Chris Robertson
Chris Robertson wrote:
 Ralf Gross wrote:
   
 Gerald Brandt wrote: 
   
 
 You may want to look at this thread 
 http://www.mail-archive.com/backuppc-users@lists.sourceforge.net/msg17234.html
  
 
   
 I've seen this thread, but the pool sizes there top out in the low-TB
 range.

 Ralf
   
 

 Not all of them...

 http://www.mail-archive.com/backuppc-users@lists.sourceforge.net/msg17240.html

 Chris

Sorry for the noise...  I was looking at the size of the full backups, 
not the pool.  On a side note, that's some serious compression, 
de-duplication, or a massive problem with the pool.


Chris




Re: [BackupPC-users] Using BackupPC to backup virtual hard discs

2010-02-19 Thread Chris Robertson
Richard Shaw wrote:
 On Fri, Feb 19, 2010 at 11:39 AM, Mike Bydalek
 mbyda...@compunetconsulting.com wrote:
   
 On Fri, Feb 19, 2010 at 10:11 AM, John Moorhouse
 john.moorho...@3jays.me.uk wrote:
 
 I'm happily using BackupPC to back up a number of machines within our home 
 network. I'm wondering what will happen if I use it to back up the file on 
 the host machine that is the virtual disc for a number of VirtualBox VMs: 
 will it have to back up the whole file each run (they are rather large), or 
 is it capable of only copying those bits of the file that have changed?

 Basically I'm after suggestions of 'best practice' in this situation
   
 An image is exactly that, a binary image.  The best thing would be to use
 BackupPC to connect to the actual virtual host and back that up.
 

 I think the OP understands that the first backup would be the whole
 virtual disk file; what he's asking is whether subsequent incremental
 backups will be a diff of the first.

 Richard
   

The answer here is that (assuming an rsync transfer method) only the difference 
will be transferred, but a separate copy of the whole image will be stored.

Chris




Re: [BackupPC-users] experiences with very large pools?

2010-02-16 Thread Chris Robertson
Ralf Gross wrote:
 Hi,

 I'm faced with the growing storage demands in my department. In the
 near future we will need several hundred TB, mostly large files. ATM
 we already have 80 TB of data which gets backed up to tape.

 Providing the primary storage is not the big problem. My biggest
 concern is backing up the data. One option would be a NetApp solution
 with snapshots. On the other hand, that is a very expensive solution for
 data that will be written once and then only read again. In short: it
 should be a cheap solution, but the data should be backed up. And it
 would be nice if we could abandon tape backups...

 My idea is to use some big RAID 6 arrays for the primary data, create
 LUNs in slices of max. 10 TB with XFS filesystems.

 Backuppc would be ideal for backup, because of the pool feature (we
 already use backuppc for a smaller amount of data).

 Does anyone have experience with backuppc and a pool size of 50 TB? I'm
 not sure how well this will work. I see that backuppc needs 45h to
 back up 3.2 TB of data right now, mostly small files.

 I don't like very large filesystems, but I don't see how this will
 scale with either multiple backuppc servers and smaller filesystems
 (well, more than one server will be needed anyway, but I don't want to
 run 20 or more servers...) or (if possible) with multiple backuppc
 instances on the same server, each with its own pool filesystem.

 So, anyone using backuppc in such an environment?
   

In one way, and compared to some, my backup set is pretty small (the pool is 
791.45GB).  In another dimension, I think it is one of the larger ones 
(comprising 20874602 files).  The breadth of my pool leads to...

-bash-3.2$ df -i /data/
Filesystem            Inodes    IUsed      IFree IUse% Mounted on
/dev/drbd0        1932728448 47240613 1885487835    3% /data

...nearly 50 million inodes used (so somewhere close to 30 million hard 
links).  XFS holds up surprisingly well to this abuse*, but the strain 
shows.  Traversing the whole pool takes three days.  Attempting to grow 
my tail (the number of backups I keep) causes serious performance 
degradation as I approach 55 million inodes.

Just an anecdote to be aware of.

 Ralf

Chris

* I have recently taken my DRBD mirror off-line and copied the BackupPC 
directory structure to both XFS-without-DRBD and an EXT4 file system for 
testing.  Performance of the XFS file system was not much different 
with or without DRBD (a fat fiber link helps there).  The first 
traversal of the pool on the EXT4 partition is about 66% complete 
after roughly 96 hours.




Re: [BackupPC-users] BackupPC 3 question

2010-02-09 Thread Chris Robertson
James Ward wrote:
 I'm trying to figure out how to do something in the GUI.

 I have the following exclude: /data0*

 Now I would like to add an exception to that rule and back 
 up: /data02/vodvendors/promo_items/

 Is it possible to set this up in the GUI?  I can't figure it out.

$Conf{BackupFilesExclude} behaves differently depending on the 
underlying transport program, rsync, smbclient or tar.  If you are using 
rsync as the backup method, I think you should be able to either add 
/data02/vodvendors/promo_items/ as an RsyncShareName, or add...

--include=/data02/vodvendors/promo_items/

...to your RsyncArgs.
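
For example, the host's per-PC config might end up looking something like 
this (an untested sketch; with rsync, include rules must appear before the 
exclude, and each parent directory of the wanted path needs its own include):

$Conf{RsyncShareName} = ['/'];
$Conf{RsyncArgs} = [
    # ...the stock arguments, then:
    '--include=/data02/',
    '--include=/data02/vodvendors/',
    '--include=/data02/vodvendors/promo_items/**',
    '--exclude=/data0*',
];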


 Ward... James Ward
 Tekco Management Group, LLC
 jew...@torzo.com
 520-290-0910x268
 ICQ: 201663408

Chris




Re: [BackupPC-users] Distro choice

2010-01-27 Thread Chris Robertson
Chris Baker wrote:
 I don't know what your level of expertise is. Please accept my apology if
 you already know this.

 The two particular distributions you mentioned are pretty different. Debian
 basically started as its own branch of distribtuion, and other distributions
 like Ubuntu and Mepis are derived from it. CentOS is derivative of RedHat.
   

To clarify, CentOS is not just a derivative of Red Hat, but a 
binary-compatible clone with different trademarks.

 Most disributions come from one of these two branches.
   

Heh.  If you don't count Suse, Mandriva, Slackware, Arch...  Perhaps I'm 
just being argumentative.  If so, I apologize.

 For me, the top virtue of Debian is that it gives you a very basic install.
 You get the command line and a few packages. Then, you have to add
 everything else. What this means is that the latest version of Debian will
 actually work pretty well on a machine that has only 512 MB of memory.

 Package managers are different on both. CentOS uses the RPM (Redhat Package
 Manager), while Debian uses APT. I think APT works much better. Debian
 definitely resolves dependencies much better than others.
   

CentOS uses YUM (Yellowdog Updater, Modified) for package management.  
Saying CentOS uses RPM is like saying Debian uses dpkg.  While I would 
certainly agree that APT is a better package manager than RPM, I prefer 
YUM to APT.  Your mileage may vary.

 One other person mentioned Gentoo. It's got a very good reputation for being
 lean and efficient. Gentoo users love Gentoo. The downside is that the
 install compiles from source and take a long time. I don't have personal
 experience with Gentoo. If you are an experienced Linux user, go ahead and
 try Gentoo. If you are a beginning Linux user, I don't recommend it.

 More and more, it seems like trying on Linux distributions is like trying on
 new suits. Some fit for some people, others fit for other people.

 Chris Baker -- cba...@intera.com
 systems administrator
 INTERA -- 512-425-2006

Chris




Re: [BackupPC-users] MakeFileLink

2010-01-26 Thread Chris Robertson
Huw Wyn Jones wrote:
 Hi folks,

 I'm trying to recover a BackupPC server which is crashing. The system is 
 throwing kernel panics on normal boot up. I can get the system up only when I 
 log-in interactively and turn off all services. It looks like a software 
 issue rather than hardware, but TBH I'm only just starting my investigation.

 First unexplained thing I came across was in the /var/log/BackupPC/ 
 directory. The logs are reporting

 BackupPC_link got error -4 when calling MakeFileLink

 What's error -4? Googling didn't come up with a satisfactory answer.
   

http://www.adsm.org/lists/html/BackupPC-users/2008-12/msg00422.html

 Thanks for your attention

 Huw
   

Chris





Re: [BackupPC-users] Comments on this backup plan please

2010-01-26 Thread Chris Robertson
Tino Schwarze wrote:
 Hi,

 On Tue, Jan 26, 2010 at 12:22:45PM -, PD Support wrote:

   
 We are going to be backing up around 30 MS-SQL server databases via ADSL to
 a number of regional servers running CentOS (about 6 databases per backup
 server). 10 sites are 'live' as of now and this is how we have started...

 The backups are between about 800MB and 5GB at the moment and are made as
 follows:

 1) A stored procedure dumps the database to SiteName_DayofWeek.bak eg:
 SHR_Mon.bak

 2) We create a local ZIP copy eg: !BSHR_Mon.zip. The !B means the file is
 EXCLUDED from backing up and is just kept as a local copy, cycled on a
 weekly basis.

 3) We rename SHR_DayofWeek.bak to SiteName.bak

 4) We split the .bak file into 200MB parts (.part1 .part2 etc.) and these
 are synced to the backup server via backuppc

 This gives us a generically-named daily backup that we sync
 (backupPC/rsyncd) up to the backup server nightly.

 We split the files so that if there is a comms glitch during the backing up
 of the large database file and we end up with a part backup, the next
 triggering of the backup doesn't have to start the large file again - only
 the missing/incomplete bits.

 Although the zip files are relatively small, we have found that their
 contents varies so much (bit-by-bit wise) on a  weekly cycle basis that they
 take a long time to sync so we leave them as local copies only.

 Seems to work OK at the mo anyway!
 

 You might want to try gzip --rsyncable instead of ZIP and see whether it
 makes a difference. Because of the file splitting etc. I'd add a .md5
 checksum file, just to be sure. Also, there is a tool whose name I
 cannot remember which allows you to split a file and generate an
 additional error-correction file, so you get a bit of redundancy and
 chances are higher to reconstruct the archive, even if a part is lost.
   

http://www.quickpar.org.uk/CreatingPAR2Files.htm

I'm sure there is a command line version as well.
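
If it helps, par2 (the par2cmdline tool) is one such command-line 
implementation; a rough sketch, using the split parts described above and 
10% redundancy:

par2 create -r10 SiteName.bak.par2 SiteName.bak.part*
# later, verify and repair from whatever parts survived:
par2 repair SiteName.bak.par2 SiteName.bak.part*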

 Disabling compression in BackupPC for these hosts might speed things up
 since the files cannot be compressed anyway.

 Tino.
   

Chris




Re: [BackupPC-users] Replacing dead backup drive

2010-01-26 Thread Chris Robertson
Dan Smisko wrote:
 Yes, only the backup data (pool, cpool, etc) was on the dead drive.  The 
 config is in /etc/BackupPC.
 I guess I will try to re-create the backup directories and try another 
 backup.

 Certainly a RAID is worth considering, but the next question is what to 
 put on a RAID drive.
 A dead root drive would not be much fun either.  My preference would be 
 software RAID
 for portability, but I don't know about performance.  Does software RAID 
 work on a Linux
 root drive?
   

Sure.  You can put /boot on a RAID1 and use any level of software RAID 
for the rest.
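
A minimal sketch with Linux md, assuming two disks (sda/sdb, hypothetical 
names) with a small first partition for /boot and a second partition for 
everything else:

mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sda1 /dev/sdb1   # /boot
mdadm --create /dev/md1 --level=1 --raid-devices=2 /dev/sda2 /dev/sdb2   # root / LVM PV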

 Thanks again for your help.

 Dan Smisko
   

Chris




Re: [BackupPC-users] server spec for BackupPC

2010-01-22 Thread Chris Robertson
Stuart Matthews wrote:
 Hi all,

 I am currently running BackupPC on the following:
 1.5GB RAM
 older processor - not sure how fast
 2TB external USB hard drive

 Clearly this isn't cutting it, although it was barely cutting it for a
 few months. This wasn't my optimal setup but I have to be cost conscious
 whenever I can, so I went with free, because I either had all this
 laying around or had it donated. The current bottlenecks seem to be hard
 drive and RAM, although CPU could be better, too.

 The environment I am trying to back up is 23 laptops, mostly OS X. They
 are generally only on the network from 10am - 5pm. The pool size is
 434.71GB comprising 4522859 files and 4369 directories. Our network is
 100Mbps.

 Here is the hardware I am considering buying:
 Shuttle barebones case, PSU.
 2x4GB DDR2 800 RAM. max 16GB on this motherboard over four slots.
 Western Digital Caviar Black WD2001FASS 2TB 7200 RPM SATA 3.0Gb/s 3.5
 Intel Core 2 Quad Q8400 Yorkfield 2.66GHz Quad-Core Processor

 The WD 2TB will be used as the BackupPC drive. I will use a random SATA
 drive that I have sitting around for the system drive. The OS will be
 the latest Debian Linux.

 Before I propose to my boss that we spend $1100 on new hardware, I
 wanted to get the opinion of the list as to whether this hardware should
 be sufficient for, say, five years to come, assuming no hardware
 failures.

So you say that your current limitation is disk (and possibly 
RAM), but you are spending $1,100 mostly on CPU (and RAM).  Personally, 
I'd take that money and buy a hardware RAID controller and as many disks 
(in the $/GB sweet spot) as I could to use with my current set-up.

Assuming your current computer does not have a PCI-Express slot...

4 Port 3Ware SATA Raid card* - $310
http://www.newegg.com/Product/Product.aspx?Item=N82E16816116052

If you DO have a PCI-Express slot...

4 Port 3Ware SATA Raid card - $275
http://www.newegg.com/Product/Product.aspx?Item=N82E16816116076

In either case...

1.5 TB SATA Drives - $145 each ( $725 for 5, so you have a spare)
http://www.newegg.com/Product/Product.aspx?Item=N82E16822148531

That leaves at least $65 for shipping and Molex-to-SATA power adapters.  
Run the four drives in RAID 5 for maximum space, or RAID 10 for 
(possibly) better performance.

  If you don't think it is sufficient, what do you think will be
 sufficient so I can price that out and propose that to my boss instead?
 Thoughts?

 Thanks!
 Stuart Matthews

Chris

* PCI-X is backwards compatible with PCI, so this card will work in a 
PCI slot.




Re: [BackupPC-users] ssh don't work to backup localhost

2009-12-24 Thread Chris Robertson
Tony Schreiner wrote:
 On 12/23/2009 10:06 PM, Claude Gélinas wrote:
   
 On Wednesday, December 23, 2009 at 21:33:50, Adam Goryachev wrote:
   
 
 Les Mikesell wrote:
 
   
 No, it should be the same.  Look in the root/.ssh/authorized_keys file to
 see if the ssh-copy-id command put the right thing there.  And make sure
 the file and directories above have the right owner/permissions.   I've
 seen some versions that want to use a file named authorized_keys2 instead
 but I'm not sure exactly why.
 

If I recall correctly, Debian (and related distributions) made this 
change with the protocol change from SSH1 to SSH2.  It's a setting in 
the sshd_config file (AuthorizedKeysFile).

   
 
 You could just copy an authorized_keys file from a working machine to
 this one as well...

 Also, use ssh -v (or -vv) to see whether it tries to use the key
 etc...

 Also, check your server config files and log files and compare to other
 working machines...

 Regards,
 Adam

 
   
 I've copied the authorized_keys from /root/.ssh on a working machine to the 
 oligoextra machine in /boot/.ssh. Permissions are OK but still no luck.

 Need a password from backuppc user.

 I even try to setup another machine to login into oligoextra with the same 
 issue. need password.

 on a working machine the -vv option give

  debug1: Server accepts key: pkalg ssh-rsa blen 277

 on oligoextra I don't have that line. Looks like the key is not received or 
 accepted by oligoextra???

   
 
  I forget if anybody has mentioned wrong file permissions as a
  possibility. The ~/.ssh directory must not be group- or world-writable.
  This will be logged in /var/log/messages if set incorrectly.

I'd start looking through the sshd_config (on Fedora, it should be in 
/etc/ssh/).  Check to make sure root logins are permitted 
(PermitRootLogin yes *or* PermitRootLogin without-password) and that 
public key authentication is allowed (PubkeyAuthentication yes).
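
The relevant directives look something like this (a sketch; the file is 
usually /etc/ssh/sshd_config on Fedora):

PermitRootLogin without-password
PubkeyAuthentication yes
AuthorizedKeysFile .ssh/authorized_keys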

Chris





Re: [BackupPC-users] ssh don't work to backup localhost

2009-12-24 Thread Chris Robertson
Claude Gélinas wrote:
 On Wednesday, December 23, 2009 at 22:14:41, Tony Schreiner wrote:
   
 I forget if anybody has mentioned wrong file permissions as a
 possibility. The ~/.ssh directory must not be group- or world-writable.
 This will be logged in /var/log/messages if set incorrectly.

 ---
 

   
 This is what I have:

 drwxrwxr-x   2 root root  4096 déc 23 21:40 .ssh
 -rw------- 1 root root 413 déc 23 21:39 authorized_keys
 -rw-rw-r-- 1 root root 861 déc 20 22:29 known_hosts
   

That would seem to be a problem...

[r...@sas ~]# ls -ld .ssh/
drwx------  2 root root 4096 Dec 17 15:31 .ssh/
[r...@sas ~]# chmod 775 .ssh/
[r...@sas ~]# ssh localhost
Password: ^C
[r...@sas ~]# chmod 700 .ssh
[r...@sas ~]# ssh localhost
[r...@sas ~]#^D
Connection to localhost closed.
[r...@sas ~]#

Chris







Re: [BackupPC-users] ssh don't work to backup localhost

2009-12-23 Thread Chris Robertson
Matthias Meyer wrote:
 Claude Gélinas wrote:

   
 I'm trying to set up the backup of the localhost with backuppc. I already
 back up several other linux machines via ssh. I've set them all up by
 running the following commands as the backuppc user:

 ssh-keygen -t dsa
 cd .ssh
 ssh-copy-id -i id_dsa.pub r...@oligoextra.phyto.qc.ca
 

 I would believe you must:
 ssh-copy-id -i id_dsa.pub backu...@oligoextra.phyto.qc.ca
 because you need the public key in /var/lib/backuppc/.ssh/authorized_keys
 and not in /root/.ssh/authorized_keys
   

If he is trying to log in as root (and by the command ssh -l root -vv  
oligoextra, he is) the public key would need to be in 
/root/.ssh/authorized_keys.  My concern is the ssh-copy-id was run 
against r...@oligoextra.phyto.qc.ca, but the ssh attempt is being run 
against oligoextra.  Why the two host names?

 cat id_dsa.pub >> /var/lib/backuppc/.ssh/authorized_keys
 should also do the job.

 br
 Matthias
   

Chris





Re: [BackupPC-users] where does BackupPC_zipCreate write the file before transmission

2009-12-23 Thread Chris Robertson
Matthias Meyer wrote:
 Hi,

 I assume BackupPC_zipCreate reads the files from the numbered dump and writes
 them locally into a .zip file. This local .zip file will be transferred to the
 destination.
 I can't find out where this local .zip file is located.
 Is my assumption wrong?

 Does BackupPC_zipCreate write the .zip to stdout so that it is transferred
 by BackupPC?
   

Yes.  BackupPC_zipCreate opens STDOUT as a file handle.

 In this case, is it possible to run two different BackupPC_zipCreate in
 parallel?
   

That should work fine.
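
A rough, untested sketch of two parallel runs, each writing its zip to 
stdout (-h host, -n dump number and -s share name; -n -1 picks the most 
recent backup):

/usr/local/BackupPC/bin/BackupPC_zipCreate -h host1 -n -1 -s /home . > /tmp/host1.zip &
/usr/local/BackupPC/bin/BackupPC_zipCreate -h host2 -n -1 -s /home . > /tmp/host2.zip &
wait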

 Thanks
 Matthias
   

Chris




Re: [BackupPC-users] rsyncd via ssh-redirected port

2009-12-22 Thread Chris Robertson
Guido Schmidt wrote:
 Matthias Meyer wrote:
   
 Guido Schmidt wrote:

 
 Matthias Meyer wrote:
   
 Guido Schmidt wrote:
 
 What works? The opening and closing of the tunnel.
 What does not? The connection to it. Nothing in the rsyncd-logs on
 host.example.com.

 If I leave DumpPostUserCmd empty the tunnel stays open and I can use it
 with rsync as user backuppc on a shell providing the password by hand:

   rsync -av --list-only --port=32323 backu...@localhost::Alles
   /home/backuppc/test/

   
 Do you provide the password during your script?
 
 The ssh-connection works (authenticated via public key). The password I
 referred to is for connecting to rsyncd and that is stored in
 $Conf{RsyncdPasswd}.

 It seems that backuppc does not reach the point where it actually tries
 to connect to rsync daemon. There are no entries in the rsyncd-log
 (there are when I use the rsync-command above). How can I find out more
 what happens and what not?

   
 I don't really know what the problem is :-(
 You can increase the loglevel with $Conf{XferLogLevel}.
 

 I already increased it to 6, but that didn't give any more details.

   
 What happens if you start your tunnel interactively and leave DumpPreUserCmd
 as well as DumpPostUserCmd empty.
 

 Okay, we're getting closer. That way the backup worked.
 So I either get BackupPC to open the tunnel or to do the backup. That's odd.
   

I'd try giving an explicit exit value upon successful tunnel creation.

...
--- /usr/local/bin/sshtunnelcontrol.orig	2009-12-22 03:16:34.0 -0900
+++ /usr/local/bin/sshtunnelcontrol	2009-12-22 03:17:09.0 -0900
@@ -27,6 +27,9 @@
   if ! ps -ef|grep -E ^backuppc $PID ; then
 echo $PRG_NAME: Error: Tunnel does not exist
 exit 1
+  else
+echo $PRG_NAME: Info: Tunnel exists
+exit 0
   fi
 else
   echo $PRG_NAME: Error: ${PIDFILE} already exists.
...

   
 Why do you need the identification by rsync? I would believe you can trust your
 ssh-tunnel and don't need additional authentication.
 

 There are users with shell-access to that host. Not protecting the port
 would give them read-access to the whole file-system.

 Guido
   

Chris




Re: [BackupPC-users] strange ssh error

2009-12-22 Thread Chris Robertson
Claude Gélinas wrote:
 I've set up a new backuppc server on my main workstation, which is FC12. 
 Everything looks fine except I can't back up my workstation via ssh; it keeps 
 asking for the root password.

 I've followed the BackupPC FAQ: SSH Setup for this workstation and a remote 
 FC9 machine. No problem with the remote machine, which I can back up without 
 problem. But for the local machine, when I do 

 ssh -q -x -l root -v oligoextra,  as backuppc user

 it keeps asking for a password for root.

 Here is what I get:

 ssh -q -x -l root -v oligoextra
 OpenSSH_5.2p1, OpenSSL 0.9.8k-fips 25 Mar 2009   
 debug1: Reading configuration data /etc/ssh/ssh_config   
 debug1: Applying options for *   
 debug1: Connecting to oligoextra [127.0.0.1] port 22.
 debug1: Connection established.  
 debug1: identity file /var/lib/BackupPC/.ssh/identity type -1
 debug1: identity file /var/lib/BackupPC/.ssh/id_rsa type 1   
 debug1: identity file /var/lib/BackupPC/.ssh/id_dsa type 2   
 debug1: Remote protocol version 2.0, remote software version OpenSSH_5.2 
 debug1: match: OpenSSH_5.2 pat OpenSSH*  
 debug1: Enabling compatibility mode for protocol 2.0 
 debug1: Local version string SSH-2.0-OpenSSH_5.2 
 debug1: SSH2_MSG_KEXINIT sent
 debug1: SSH2_MSG_KEXINIT received
 debug1: kex: server->client aes128-ctr hmac-md5 none 
 debug1: kex: client->server aes128-ctr hmac-md5 none 
 debug1: SSH2_MSG_KEX_DH_GEX_REQUEST(1024<1024<8192) sent 
 debug1: expecting SSH2_MSG_KEX_DH_GEX_GROUP  
 debug1: SSH2_MSG_KEX_DH_GEX_INIT sent
 debug1: expecting SSH2_MSG_KEX_DH_GEX_REPLY  
 debug1: Host 'oligoextra' is known and matches the RSA host key. 
 debug1: Found key in /var/lib/BackupPC/.ssh/known_hosts:2
 debug1: ssh_rsa_verify: signature correct
 debug1: SSH2_MSG_NEWKEYS sent
 debug1: expecting SSH2_MSG_NEWKEYS   
 debug1: SSH2_MSG_NEWKEYS received
 debug1: SSH2_MSG_SERVICE_REQUEST sent
 debug1: SSH2_MSG_SERVICE_ACCEPT received 
 debug1: Authentications that can continue: publickey,gssapi-with-mic,password
 debug1: Next authentication method: gssapi-with-mic
 debug1: Unspecified GSS failure.  Minor code may provide more information
 No credentials cache found

 debug1: Unspecified GSS failure.  Minor code may provide more information
 No credentials cache found

 debug1: Unspecified GSS failure.  Minor code may provide more information


 debug1: Next authentication method: publickey
 debug1: Trying private key: /var/lib/BackupPC/.ssh/identity
 debug1: Offering public key: /var/lib/BackupPC/.ssh/id_rsa
 debug1: Authentications that can continue: publickey,gssapi-with-mic,password
 debug1: Offering public key: /var/lib/BackupPC/.ssh/id_dsa
 debug1: Authentications that can continue: publickey,gssapi-with-mic,password
 debug1: Next authentication method: password
 r...@oligoextra's password:

 What can I do? Is there something different between FC9 and FC12?
   

Nothing that should prevent this from working.  Please verify that you 
have put the public key (associated with the BackupPC user's private 
key) in /root/.ssh/authorized_keys on your workstation.

Chris





Re: [BackupPC-users] Backuppc only works from the command line

2009-12-16 Thread Chris Robertson
M. Sabath wrote:
 Hello all,

 I use backuppc on Debian 5.
 Since I upgraded from Debian 4 to Debian 5 backuppc doesn't run
 automatically.

 Our server runs only during the daytime, between 7 am and 7 pm (19:00)
   

Let me see if I have this right...  Your server is only powered on from 
7 am to 7 pm...

 From the command line all works fine.

 Using backuppc with Debian 4.0 all worked fine.

 What am I doing wrong?


 Thank you

 Markus


 --


 Here are some configuration entries of my config.pl which might be
 interesting:

 $Conf{WakeupSchedule} = [1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14,
 15, 16, 17, 18, 19, 20, 21, 22, 23];


 $Conf{FullPeriod} = 6.97;
 $Conf{IncrPeriod} = 0.97;

 $Conf{FullKeepCnt} = [3,1,1];

 $Conf{BackupsDisable} = 0;

 $Conf{BlackoutPeriods} = [
 {
   hourBegin =>  7.0,
   hourEnd   => 19.5,
   

...your blackout period covers that whole time...

   weekDays  => [1, 2, 3, 4, 5],
   

...at least on the weekdays, and you are wondering why BackupPC is not 
working automatically?

 },
 ];
   

Let me know if my understanding is not correct.  Otherwise, I'd suggest 
reading the fine manual: 
http://backuppc.sourceforge.net/faq/BackupPC.html#item__conf_blackoutperiods_
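
If the intent is to let backups run while the server is powered on, the 
blackout window has to leave a gap inside the 7 am to 7 pm uptime; a sketch 
(the hours here are only an example):

$Conf{BlackoutPeriods} = [
    {
        hourBegin =>  8.0,   # no backups between 08:00...
        hourEnd   => 17.0,   # ...and 17:00 on weekdays
        weekDays  => [1, 2, 3, 4, 5],
    },
];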

Chris




Re: [BackupPC-users] Slow link options

2009-12-16 Thread Chris Robertson
Kameleon wrote:
 I have a few remote sites I am wanting to backup using backuppc. 
 However, two are on slow DSL connections and the other 2 are on T1's. 
 I did some math and roughly figured that the DSL connections, having a 
 256k upload, could do approximately 108MB/hour of transfer. With these 
 clients having around 65GB each that would take FOREVER!!!

 I am able to take the backuppc server to 2 of the remote locations 
 (the DSL ones) and put it on the LAN with the server to be backed up 
 to get the initial full backup. What I am wondering is this: What do 
 others do with slow links like this? I need a full backup at least 
 weekly and incrementals nightly. Is there an easy way around this?

The feasibility of this depends entirely on the rate of change of the 
backup data.  Once you get the initial full, rsync backups only transfer 
changes.  Have a look at the documentation 
(http://backuppc.sourceforge.net/faq/BackupPC.html#backup_basics) for 
more details.


 Thanks in advance.

Chris




Re: [BackupPC-users] pools are showing data but clients have no backups

2009-12-15 Thread Chris Robertson
sabujp wrote:
 The problem seems to be that this file:


 [r...@gluster3 data_jsmith]# /usr/local/BackupPC/bin/BackupPC_deleteBackup.sh -c data_jsmith -l
 /usr/local/BackupPC/bin/BackupPC_deleteBackup.sh: line 93: /glfsdist/backuppc3/pc/data_jsmith/backups: No such file or directory


 Does not exist in any of the directories for the clients. What I do see in 
 those directories however is a backups.old and backups.new:


 [r...@gluster3 data_jsmith]# ls -l backups.*
 -rw-r----- 1 root root 3919 Dec 14 14:20 backups.new
 -rw-r----- 1 root root 3837 Dec 13 11:15 backups.old


 Is there some way to get backuppc to re-generate the backups file?

Run /usr/local/BackupPC/bin/BackupPC_fixupBackupSummary without any 
arguments as the user that the backuppc process runs as (is it really 
root, in your case?).
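
For example (a sketch, assuming the daemon normally runs as a user named 
backuppc):

su -s /bin/sh backuppc -c '/usr/local/BackupPC/bin/BackupPC_fixupBackupSummary'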

  Can I just copy the backups.new file to backups?
   

Better to copy the backups.old file, as it was known good.

Chris




Re: [BackupPC-users] pools are showing data but clients have no backups

2009-12-15 Thread Chris Robertson
sabujp wrote:
 I ran an incremental and it completed but it's not being recorded in backups 
 but instead in backups.new. When does backups.new get copied to backups so 
 that the new incremental will show up on the webpage or what process causes 
 this to happen?

The file backups.new is written out by a function called BackupInfoWrite 
(which further uses a function called TextFileWrite), which is called 
both in BackupPC_dump, when a backup completes and in BackupPC_link, 
when the linking of the backup completes.  The file backups.new is 
written, verified and then renamed to backups.  If the verification 
fails, the rename does not occur.  I'd suggest checking for permissions 
issues or file system corruption.

Chris




Re: [BackupPC-users] An idea to fix both SIGPIPE and memory issues with rsync

2009-12-15 Thread Chris Robertson
Robin Lee Powell wrote:
 On Tue, Dec 15, 2009 at 02:33:06PM +0100, Holger Parplies wrote:
   
 Robin Lee Powell wrote on 2009-12-15 00:22:41 -0800:
 
 Oh, I agree; in an ideal world, it wouldn't be an issue.  I'm
 afraid I don't live there.  :)
   
 none of us do, but you're having problems. We aren't. 
 

 How many of you are backing up trees as large as I am?  So far,
 everyone who has commented on the matter has said it's not even
 close.
   

For what it's worth, I'm certainly not backing up trees as large as 
yours (my largest host is approaching 100GB in 1.25 million files, which 
can take more than 10 hours), but I do have a 50GB host backing up over 
satellite.  Barring network outages, my backups work quite reliably.

 The suggestion that your *software* is probably misconfigured in
 addition to the *hardware* being flakey makes a lot of sense to
 me. 
 

 Certainly possible, but if it is I genuinely have no idea where the
 misconfiguration might be.  Also note that only the incrementals
 seem to fail; the initial fulls ran Just Fine (tm).  One of them
 took 31 hours.
   

And I would imagine that data was flowing over the link the whole time.

My guess would be that your firewalls are set up to close inactive TCP 
sessions.  Try adding -o ServerAliveInterval=60 to your RsyncClientCmd 
(so it looks something like $sshPath -C -q -x -o ServerAliveInterval=60 
-l root $host $rsyncPath $argList+) and see if that solves your problem.
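
In config terms that would be something like (a sketch; merge the option into 
whatever RsyncClientCmd you already have):

$Conf{RsyncClientCmd} = '$sshPath -C -q -x -o ServerAliveInterval=60 -l root $host $rsyncPath $argList+';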

 For what it's worth, here's what a client strace says before things
 crack on one of my larger incrementals; commentary welcome.

 - --

 open([customer dir], O_RDONLY|O_NONBLOCK|O_DIRECTORY) = 3
 fstat(3, {st_mode=S_IFDIR|0777, st_size=3864, ...}) = 0
 fcntl(3, F_SETFD, FD_CLOEXEC)   = 0
 getdents(3, /* 12 entries */, 4096) = 616
 lstat([customer dir]/Sapphire_Pearl_Pendant_selector.jpg, 
 {st_mode=S_IFREG|0644, st_size=1363, ...}) = 0
 lstat([customer dir]/Sapphire_Pearl_Pendant_gallery.jpg, 
 {st_mode=S_IFREG|0644, st_size=5482, ...}) = 0
 lstat([customer dir]/Sapphire_Pearl_Pendant_shop_banner.jpg, 
 {st_mode=S_IFREG|0644, st_size=19358, ...}) = 0
 lstat([customer dir]/Sapphire_Pearl_Pendant_library.jpg, 
 {st_mode=S_IFREG|0644, st_size=2749, ...}) = 0
 lstat([customer dir]/Sapphire_Pearl_Pendant_badge.jpg, 
 {st_mode=S_IFREG|0644, st_size=8073, ...}) = 0
 lstat([customer dir]/Sapphire_Pearl_Pendant_browse.jpg, 
 {st_mode=S_IFREG|0644, st_size=2352, ...}) = 0
 lstat([customer dir]/Sapphire_Pearl_Pendant_display.jpg, 
 {st_mode=S_IFREG|0644, st_size=33957, ...}) = 0
 lstat([customer dir]/Sapphire_Pearl_Pendant_segment.jpg, 
 {st_mode=S_IFREG|0644, st_size=1152, ...}) = 0
 lstat([customer dir]/Sapphire_Pearl_Pendant.JPG, {st_mode=S_IFREG|0644, 
 st_size=88733, ...}) = 0
 lstat([customer dir]/Sapphire_Pearl_Pendant_market_banner.jpg, 
 {st_mode=S_IFREG|0644, st_size=21168, ...}) = 0
 getdents(3, /* 0 entries */, 4096)  = 0
 close(3)= 0
 gettimeofday({1260864378, 747386}, NULL) = 0
 gettimeofday({1260864378, 747429}, NULL) = 0
 mmap(NULL, 20398080, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_ANONYMOUS, -1, 0) 
 = 0x2ba2eb836000
 munmap(0x2ba2eb836000, 20398080)= 0
 select(2, NULL, [1], [1], {60, 0})  = 1 (out [1], left {60, 0})
 write(1, 
 \217\2\0\7badge.jpgO/\0\0g\272\341I:3\17shop_banner.jpg\334\332\0\0h\272\341I:3\nbrowse.jpg\10\16\0\0i\272\341I:3\vlibrary.jpg\\\17\0\0h\272\341I:3\fselector.jpg\16\10\0\0j\272\341I:3\21market_banner.jpg\312\346\0\0f\272\341I:2\4.jpgx\255\7\0e\272\341I:2\f_display.jpg\r\235\0\0g\272\341I:3\vsegment.jpgk\6\0\0j\272\341I:$$9642/Banner_-BODIE-Piano_display.jpg\3447\0\0jc\342I\272=\tbadge.jpg\211\30\0\0\272=\vsegment.jpgn\4\0\0\272?\nlector.jpg\5\0\0:\16hop_banner.jpg\226d\0\0-$\270J:=\nbrowse.jpgs\10\0\0jc\342I\272=\vlibrary.jpgO\10\0\0\272\4.jpg\342p\0\0\272\f_gallery.jpg.\22\0\0\272=\21market_banner.jpg\25\203\0\0:$(2987/sapphire_pearl_pendant_selector.jpgs\5\0\0\344~\341i\...@\vgallery.jpgj\25\0\0:@\17shop_banner.jpg\236k\0\0\343~\341i\...@\vlibrary.jpg\275\n\0\0:@\tbadge.jpg\211\37\0\0\342~\341I:A\trowse.jpg0\t\0\0\344~\341I:@\vdisplay.jpg\245\204\0\0\343~\341I:@\vsegment.jpg\200\4\0\0\344~\341I:?\4.JPG\235Z\1\0\342~\341I\272?\22_market_banner.jpg\260R\0\0\0\0\0\0\0,
  659) = 659
 select(1, [0], [], NULL, {60, 0})   = 0 (Timeout)
 select(1, [0], [], NULL, {60, 0})   = 0 (Timeout)
 select(1, [0], [], NULL, {60, 0})   = 0 (Timeout)
 select(1, [0], [], NULL, {60, 0})   = 0 (Timeout)
 select(1, [0], [], NULL, {60, 0})   = 0 (Timeout)
 select(1, [0], [], NULL, {60, 0})   = 0 (Timeout)
 select(1, [0], [], NULL, {60, 0})   = 0 (Timeout)
 select(1, [0], [], NULL, {60, 0})   = 0 (Timeout)
 select(1, [0], [], NULL, {60, 0})   = 0 (Timeout)
 select(1, [0], [], NULL, {60, 0})   = 0 (Timeout)
 select(1, [0], [], NULL, {60, 0})   = 0 (Timeout)
 select(1, [0], [], NULL, {60, 0})   = 0 (Timeout)
 select(1, [0], [], 

Re: [BackupPC-users] making a mirror image of the backup pc disk

2009-12-09 Thread Chris Robertson
Pat Rice wrote:
 HI all
 Well, at the moment I am recovering from a flooding situation.
 I had my office flooded to 2.5ft of water. Luckily the Backup server
 (backup pc) was above the water line and also my hard drive for my
 backup server. Unfortunately my machines that were on the ground, were
 not so lucky. I have spent one or two days pouring water out of
 machines, not a pleasant sight

 I have a new hard drive as I am worried about the hard drive that was
 already in the system, being exposed to the damp/water etc.

 So I want to try and get the data off it and reconnect it to the server.
 The ideal situation would be an rsync copy of the server at a different
 location (has anyone done this?).

 I had it set up as a LVM so /lib/backuppc/backups

 What I would like to know, or if any on had any experience of:
 Making a mirror of the backup disk:
 Should I do a dd?
 or would a copy be sufficient ?
 or will I have to worry about hard links that need to be kept ?


 Or should I just bite the bullet and put in a rsync server and take my
 chances with the disk?
 Any advice would be greatly received.

 Thanks in advance
 Pat
   

See the FAQ section "Copying the pool" under the header "Other 
installation topics": 
http://backuppc.sourceforge.net/faq/BackupPC.html#other_installation_topics

Chris




Re: [BackupPC-users] Exluding specific files within a directory tree

2009-12-09 Thread Chris Robertson
ckandreou wrote:
 I have the following files
 /cmroot/ems_src/view/2010_emsmadd.vws/.pid
 /cmroot/ems_src/view/2010_deva.vws/.pid
 /cmroot/ems_src/view/emsadmcm_01.03.006.vws/.pid
 /ccdev10/cmroot/ems_src/vob/mems.vbs/.pid

 I would like backuppc to exclude .pid 

 I used the following exclude line in the host's .pl config file: 
   '--exclude=/cmroot/ems_src/view/*/.pid',
   

Better would be using the built in configuration parameter 
$Conf{BackupFilesExclude} 
(http://backuppc.sourceforge.net/faq/BackupPC.html#item__conf_backupfilesexclude_)

 I would appreciate if someone could confirm that is correct. If not, any 
 advice on how to achieve it, would be great.
   

$Conf{BackupFilesExclude} = '*/.pid';

Should match a file named .pid in any directory (at least if using 
rsync, rsyncd or tar.  I'm a bit fuzzy on the SMB matching).
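
With multiple shares, the hash form of the same setting may be handier (a 
sketch; the '*' key applies the exclude to every share):

$Conf{BackupFilesExclude} = {
    '*' => ['*/.pid'],
};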

 Thanks in advance.

 Chris.

Chris


--
Return on Information:
Google Enterprise Search pays you back
Get the facts.
http://p.sf.net/sfu/google-dev2dev
___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:http://backuppc.wiki.sourceforge.net
Project: http://backuppc.sourceforge.net/


Re: [BackupPC-users] OT: (e.g.) sed command to modify configuration file

2009-09-28 Thread Chris Robertson
Timothy J Massey wrote:
 Hello!

 I have a shell script that I use to install BackupPC.  It takes a standard 
 CentOS installation and performs the configuration that I would normally 
 do to install BackupPC.  There are probably way better ways of doing this, 
 but this is the way I've chosen.

 As part of this script, I use sed to modify certain configuration files. 

Why modify, when you can replace?


cat > /etc/ssh/sshd_config << EOF
# This is my sshd_config.
# There are many like it, but this one is mine...
Protocol 2
PermitRootLogin no
EOF


Be aware, this is not a complete list of options.  egrep -v '(^#|^$)' 
/etc/ssh/sshd_config (before you run the above cat!) is more likely to be.

Chris




Re: [BackupPC-users] BackupPC and DRBD - My experience so far

2009-09-17 Thread Chris Robertson
Ian Levesque wrote:
 On Sep 15, 2009, at 7:12 PM, Chris Robertson wrote:

   
 ...even though they have more than a mile of physical separation.  I
 don't currently have good data as to the bandwidth utilization during
 backups (the DRBD config is set to limit it to 10M, which is about
 110Mbit/sec with TCP overhead), but the BackupPC_nightly and
 BackupPC_trashclean give an average 5Mbit/sec combined.  Over a 24  
 hour
 period the servers have passed nearly 80GB of data between them (78GB
 from the source, 2GB from the target).

 There has been no discernible effect to the amount of time it takes to
 backup my hosts.

 If you have any questions, or feel there is anything I was not clear
 about, feel free to ask.
 

 Thanks for your report. With that fiber link, it's no wonder you get  
 LAN-like results. What I'm curious about is if the IO requirements of  
 BackupPC would allow for an offsite replication over a typical WAN  
 link (say, 20-30ms round-trip).

The future plan is to put the replication server in Seattle, WA (about 
30ms round trip as the bits fly).  If I find some time in the next week 
or so, I'll see about setting up a FreeBSD box with a dummynet bridge to 
test the effect of latency and packet loss on my replication.
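
For reference, a rough dummynet sketch for that bridge (hypothetical 
interface name em0; 30ms of delay plus a little packet loss):

ipfw pipe 1 config bw 100Mbit/s delay 30ms plr 0.001
ipfw add pipe 1 ip from any to any via em0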

Chris




Re: [BackupPC-users] Setting up a new BackupPC server

2009-09-16 Thread Chris Robertson
Jim Leonard wrote:
 James Ward wrote:
   
 I forgot to mention there are 16 disks in the big array.  So you'd 
 recommend RAID5 or 6?
 

 I'd recommend RAID1+0 actually (RAID10) if you have that many disks. 
 You'll have half the available disk space, but the speed of a stripe. 
 Plus you can lose up to half the disks and still have access to your data.
   

As long as you lose the RIGHT half.  Pessimistically, you can have a 
total failure from losing only two disks.

Chris




Re: [BackupPC-users] BackupPC and DRBD - My experience so far

2009-09-16 Thread Chris Robertson
dan wrote:

 On Wed, Sep 16, 2009 at 3:24 AM, Tino Schwarze backuppc.li...@tisc.de wrote:

 On Tue, Sep 15, 2009 at 03:12:28PM -0800, Chris Robertson wrote:

  In short, it works for me.

 [...]

 Wow, thanks for sharing your experience. I figure that DRBD is a nice
 way to RAID-1 across multiple hosts for failover purposes. I didn't
 expect it to perform that well - I'll look into it for Samba/NFS
 backup
 server... (It nicely integrates with hearbeat BTW)


 drbd (or AoE or iSCSI really) are great over local networks.  Latency 
 is usually pretty low and they are all low-overhead.

 Reading this post, he is just putting the xfs journal on drbd.

Read it again.  :o)  Both the external XFS journal (logdev=/dev/drbd1) 
AND the data partition (/dev/drbd0) are DRBD mirrored.  It would be 
silly to have only one or the other saved in a DR scenario.

   All this really does is save the disk heads from doing some seeks, 
 which will lower the average latency of the drive somewhat because 
 there is less travel (some sites online say up to a 30% write improvement, 
 but that was surely a special workload).  You could do this with a local 
 drive also.  Since this is just the journal, the only concern here is 
 having the system go down and losing the journal device at the same time.

 FYI, you can put the journal on a flash key as well, since the journal 
 doesn't take that much space.

I would not recommend doing this.  My journal has seen 229GB of writes 
since I started using DRBD (about a month) but only 260MB of reads.  So 
while my journal is small enough (128MB*) to fit on pretty much any USB 
key, it is written to extensively and seldom read from.  Not a very 
good usage situation for flash.

   The journal gets overwritten once the filesystem confirms the write, 
 so you really only have to store as much data as the filesystem has 
 queued up to write.  Really, a 1GB journal is way overkill, but a 1GB 
 flash drive is cheap.  Also, the journal doesn't have to be smoking 
 fast, so a flash drive that is ReadyBoost capable (only a measurement of 
 speed) should be fine.

 I think that you have to make the journal device during filesystem 
 creation, but Google might tell you that you can add an external 
 journal later; I'm not 100% sure on this.

It is possible to relocate the journal after-the-fact.  It's not 
possible to re-integrate it.

Chris

* [r...@archive-1 ~]# xfs_info /data
meta-data=/dev/drbd0 isize=256    agcount=32, 
agsize=25165760 blks
 =   sectsz=512   attr=2
data =   bsize=4096   blocks=805303520, imaxpct=15
 =   sunit=64 swidth=832 blks, unwritten=1
naming   =version 2  bsize=4096 
log  =external   bsize=4096   blocks=32768, version=2
 =   sectsz=512   sunit=64 blks, lazy-count=0
realtime =none   extsz=4096   blocks=0, rtextents=0




Re: [BackupPC-users] BackupPC and DRBD - My experience so far

2009-09-16 Thread Chris Robertson
Les Mikesell wrote:
 Chris Robertson wrote:
   
 Read it again.  :o)  Both the external XFS journal (logdev=/dev/drbd1) 
 AND the data partition (/dev/drbd0) are DRBD mirrored.  It would be 
 silly to have only one or the other saved in a DR scenario.
 

 Have you investigated/tested what happens if one end or the other 
 crashes or they lose connectivity for some period of time?   I think 
 drbd has some tricks to deal with that but I don't know how well they 
 work in practice.
   

Well, I ran the BackupPC server without a partner for more than 2 weeks 
without a hitch...

Tell you what.  Let me just run downstairs and unplug the network cable 
from the mirror server.  :o)

I'll send another note in a couple hours with an account of what 
happens, then I'll go re-plug it in and report further.

Chris




Re: [BackupPC-users] question about email reminders -- custom to address?

2009-09-16 Thread Chris Robertson
backu...@omidia.com wrote:
 So I have a question about email reminders.

 I don't see a way to customize who the emails are sent to, short of
 changing the usernames.  (I see a way to customize the domain, with
 $Conf{EMailUserDestDomain} = '';, but that's it.)

 But I have a user with username bill, his email address is
 billmac...@domain1.com.  I have another user with username jason, his
 email address is jasonmackint...@domain2.com.

 I'm a little suprised that I can't simply enter an email address in the
 hosts config file, and have it use that!

 I'm running backuppc 2.1.2pl0 on this particular host, but 3.0 on most of
 the other hosts i administer, and am considering upgrading this one, but
 regardless, I haven't found an option in any of the documentation!
   

On 3.1 at least, you can allow more than one login to a particular 
host.  User will receive the emails, moreUsers have access, but will not 
receive emails.  It sounds to me that you should set billmackay as the 
user for the first server and add bill as a moreUsers.

http://backuppc.sourceforge.net/faq/BackupPC.html#step_4__setting_up_the_hosts_file
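
For example, the hosts file entries could look like this (a sketch with
placeholder host names; the columns are host, dhcp flag, user and an
optional comma-separated moreUsers list):

host            dhcp    user            moreUsers
server1         0       billmackay      bill
server2         0       jasonmackintosh jason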

 Any help would be appreciated, thanks!

Chris


--
Come build with us! The BlackBerry(R) Developer Conference in SF, CA
is the only developer event you need to attend this year. Jumpstart your
developing skills, take BlackBerry mobile applications to market and stay
ahead of the curve. Join us from November 9-12, 2009. Register now!
http://p.sf.net/sfu/devconf
___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:http://backuppc.wiki.sourceforge.net
Project: http://backuppc.sourceforge.net/


Re: [BackupPC-users] BackupPC and DRBD - My experience so far

2009-09-16 Thread Chris Robertson
Les Mikesell wrote:
 Chris Robertson wrote:
   
 Les Mikesell wrote:
 
 Chris Robertson wrote:
   
   
 Read it again.  :o)  Both the external XFS journal (logdev=/dev/drbd1) 
 AND the data partition (/dev/drbd0) are DRBD mirrored.  It would be 
 silly to have only one or the other saved in a DR scenario.
 
 
 Have you investigated/tested what happens if one end or the other 
 crashes or they lose connectivity for some period of time?   I think 
 drbd has some tricks to deal with that but I don't know how well they 
 work in practice.
   
   
 Well, I ran the BackupPC server without a partner for more than 2 weeks 
 without a hitch...

 Tell you what.  Let me just run downstairs and unplug the network cable 
 from the mirror server.  :o)

 I'll send another note in a couple hours with an account of what 
 happens, then I'll go re-plug it in and report further.
 

Well, that was (expectedly) a boring test.  The BackupPC server was 
running the BackupPC_nightly jobs when I pulled the mirror's net 
connection.  At the time there were no out-of-sync entries.  Upon 
network break the master reported PingAck did not arrive in time and 
the peer was marked as Unconnected.  Before plugging the mirror back in, 
the nightly job had finished.  A total of 860412K was marked as out of 
sync.  Plugging the network cable back in caused a resync to 
automatically run.  All is right with the world.

More fun would be to interrupt the synchronization during heavy access 
(during backups or trashclean).  Then mark the backup as Primary see if 
it mounts.

Perhaps a test for another day.  Truly I expect it would be akin to 
mounting after a crash.
 Thanks - I'm interested in the theory as well as the practice on this 
 but didn't really expect it to work very well so I haven't followed the 
 software for a while.  Does it have some kind of diagnostics to see if 
 the partners are fully in sync

Yes.  http://www.drbd.org/users-guide-emb/s-online-verify.html

Chris




--
Come build with us! The BlackBerry(R) Developer Conference in SF, CA
is the only developer event you need to attend this year. Jumpstart your
developing skills, take BlackBerry mobile applications to market and stay
ahead of the curve. Join us from November 9-12, 2009. Register now!
http://p.sf.net/sfu/devconf
___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:http://backuppc.wiki.sourceforge.net
Project: http://backuppc.sourceforge.net/


[BackupPC-users] BackupPC and DRBD - My experience so far

2009-09-15 Thread Chris Robertson
In short, it works for me.

Machine specs:

CPU : Intel Xeon X3320 (Quad Core @2.50GHz)
Memory: 8GB DDR2-667 ECC
Storage Controller: Adaptec 51645 (BIOS & Firmware 5.2-1 17380, driver 
1.1-5 2465)
Drives: 16 Seagate ST31000340NS (1TB ES.2) w/AN05 firmware
OS: CentOS 5.3

[r...@archive-1 ~]# uname -a
Linux archive-1.gcimbs.net 2.6.18-128.7.1.el5 #1 SMP Mon Aug 24 08:21:56 
EDT 2009 x86_64 x86_64 x86_64 GNU/Linux
[r...@archive-1 ~]# rpm -q kmod-xfs
kmod-xfs-0.4-2
[r...@archive-1 ~]# rpm -q drbd83
drbd83-8.3.2-6.el5_3
[r...@archive-1 ~]# rpm -q kmod-drbd83
kmod-drbd83-8.3.2-6.el5_3

/data is an XFS file system (with an external journal) mounted 
(noatime,nodiratime,logdev=/dev/drbd1,logbufs=8,logbsize=262144) on top 
of DRBD on a RAID 6 setup provided by the aforementioned Adaptec.

[r...@archive-1 ~]# df -h /dev/drbd0
Filesystem            Size  Used Avail Use% Mounted on
/dev/drbd0            3.0T  984G  2.1T  33% /data
[r...@archive-1 ~]# df -i /dev/drbd0
Filesystem              Inodes    IUsed      IFree IUse% Mounted on
/dev/drbd0          1932728448 39835171 1892893277    3% /data

BackupPC Server Status

* The servers PID is 23490, on host archive-1.gcimbs.net, version
  3.1.0, started at 2009-09-09 18:24.
* This status was generated at 2009-09-15 14:08.
* The configuration was last loaded at 2009-09-09 18:24.
* PCs will be next queued at 2009-09-15 15:00.
* Other info:
  o 0 pending backup requests from last scheduled wakeup,
  o 0 pending user backup requests,
  o 0 pending command requests,
  o Pool is 869.19GB comprising 12913308 files and 4369
directories (as of 2009-09-15 13:27),
  o Pool hashing gives 2919 repeated files with longest chain 609,
  o Nightly cleanup removed 1038178 files of size 52.19GB
(around 2009-09-15 13:27),
  o Pool file system was recently at 33% (2009-09-15 14:00),
today's max is 33% (2009-09-15 09:00) and yesterday's max
was 33%.

The observant will notice that my pool has grown significantly since 
August 27 
(http://article.gmane.org/gmane.comp.sysutils.backup.backuppc.general/20554).  
This is due to changing the way my biggest host (a web server) was 
backed up.  Suffice it to say, the change was not beneficial and has 
been reverted.

BackupPC: Host Summary

* This status was generated at 2009-09-15 14:34.
* Pool file system was recently at 33% (2009-09-15 14:00), today's
  max is 33% (2009-09-15 09:00) and yesterday's max was 33%.

Hosts with good Backups

There are 126 hosts that have been backed up, for a total of:

* 500 full backups of total size 2562.94GB (prior to pooling and
  compression),
* 1835 incr backups of total size 909.94GB (prior to pooling and
  compression).

The host information given in the linked email is still accurate.  I 
have since spun up a nearly identical server* on the far end of a GPON 
link.  The initial synchronization was allowed to transfer without a 
speed cap (and averaged around 600Mbit per second over a 12 hour 
period).  The latency between the two hosts is minimal...

[r...@archive-1 ~]# ping -c10 archive-2.gcimbs.net
PING archive-2.gcimbs.net (66.223.232.56) 56(84) bytes of data.
64 bytes from 66.223.232.56: icmp_seq=1 ttl=64 time=0.450 ms
64 bytes from 66.223.232.56: icmp_seq=2 ttl=64 time=0.484 ms
64 bytes from 66.223.232.56: icmp_seq=3 ttl=64 time=0.459 ms
64 bytes from 66.223.232.56: icmp_seq=4 ttl=64 time=0.477 ms
64 bytes from 66.223.232.56: icmp_seq=5 ttl=64 time=0.491 ms
64 bytes from 66.223.232.56: icmp_seq=6 ttl=64 time=0.461 ms
64 bytes from 66.223.232.56: icmp_seq=7 ttl=64 time=0.434 ms
64 bytes from 66.223.232.56: icmp_seq=8 ttl=64 time=0.451 ms
64 bytes from 66.223.232.56: icmp_seq=9 ttl=64 time=0.478 ms
64 bytes from 66.223.232.56: icmp_seq=10 ttl=64 time=0.435 ms

--- archive-2.gcimbs.net ping statistics ---
10 packets transmitted, 10 received, 0% packet loss, time 9004ms
rtt min/avg/max/mdev = 0.434/0.462/0.491/0.018 ms

...even though they have more than a mile of physical separation.  I 
don't currently have good data as to the bandwidth utilization during 
backups (the DRBD config is set to limit it to 10M, which is about 
110Mbit/sec with TCP overhead), but the BackupPC_nightly and 
BackupPC_trashclean give an average 5Mbit/sec combined.  Over a 24 hour 
period the servers have passed nearly 80GB of data between them (78GB 
from the source, 2GB from the target).

There has been no discernible effect to the amount of time it takes to 
backup my hosts.

If you have any questions, or feel there is anything I was not clear 
about, feel free to ask.

Chris

* Its CPU is an Intel Xeon E3110 @3.0 GHz, with the other specs the 
same.  To be honest, the target does not need to be anything special (as 
long as it stays a target).  Even during the initial synchronization it 
was mostly idle.



Re: [BackupPC-users] Setting up a new BackupPC server

2009-09-15 Thread Chris Robertson
James Ward wrote:
 I forgot to mention there are 16 disks in the big array.  So you'd 
 recommend RAID5 or 6?

Your best bet is to set it up and run some benchmarks*.  Anything else 
is just speculation.

For what it's worth, I have a similar setup (Intel Xeon X3320 Quad Core, 
8GB RAM, 16 drives on an Adaptec 51645).  I went with RAID 6 with a hot 
spare for my BackupPC partition and formatted it XFS.

I'm backing up (and expiring) around 50GB of data nightly across ~115 
hosts in a ~10 hour window (backups start at 21:00 and have mostly 
completed by 07:00).

As always, your mileage may vary.

Chris

*Good luck finding anything even remotely resembling BackupPC's actual 
work load.  Especially once you have any kind of backup history.


--
Come build with us! The BlackBerry(R) Developer Conference in SF, CA
is the only developer event you need to attend this year. Jumpstart your
developing skills, take BlackBerry mobile applications to market and stay
ahead of the curve. Join us from November 9-12, 2009. Register now!
http://p.sf.net/sfu/devconf
___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:http://backuppc.wiki.sourceforge.net
Project: http://backuppc.sourceforge.net/


Re: [BackupPC-users] periodically e2fsck the device /var/lib/backuppc

2009-09-08 Thread Chris Robertson
Matthias Meyer wrote:
 Is there a way to retain the job queue? Or to check if anything is in it?
   

Not that I'm aware of.

In theory, storing the job queue over a shutdown shouldn't be tough (it 
should just be a matter of writing a construct to a file, and reading it 
in on startup).  At the moment I don't have the time to look into it.
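
Just to illustrate the idea, a minimal sketch with Storable (the file
location and the queue variable are assumptions, only meant to stand in
for BackupPC's in-memory queues):

use Storable qw(store retrieve);

my $queueFile = "$TopDir/log/queue.state";   # hypothetical location

# on shutdown: save whatever is still queued
store(\@BgQueue, $queueFile) or warn "unable to save queue\n";

# on startup: re-load it if present
@BgQueue = @{ retrieve($queueFile) } if -f $queueFile;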

 Matthias
   

Chris


--
Let Crystal Reports handle the reporting - Free Crystal Reports 2008 30-Day 
trial. Simplify your report design, integration and deployment - and focus on 
what you do best, core application coding. Discover what's new with 
Crystal Reports now.  http://p.sf.net/sfu/bobj-july
___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:http://backuppc.wiki.sourceforge.net
Project: http://backuppc.sourceforge.net/


Re: [BackupPC-users] periodically e2fsck the device /var/lib/backuppc

2009-09-04 Thread Chris Robertson
Matthias Meyer wrote:
 Hello,

 I plan to periodically e2fsck my /var/lib/backuppc.
 I want to write a bash script which check if BackupPC_dump is running.
 If not, it will stop backuppc, unmount the device and run
 e2fsck -fp $device

 What is about BackupPC_link? Should I check for this process too?
   

Yes.

 There are further processes for which I should wait before unmounting the
 device?
   

BackupPC_nightly

BackupPC_trashClean is started by the BackupPC service, but the dump and 
nightly jobs are (by default) schedule based, and the link follows a 
dump.  The job queue is not retained over a shutdown-startup cycle.
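
A rough sketch of such a check, assuming grepping ps output is an
acceptable test (wrap it around your umount/e2fsck however you see fit):

#!/usr/bin/perl
# exit non-zero if any BackupPC job that writes to the pool is running
my @busy = grep { /BackupPC_(dump|link|nightly)\b/ && !/grep/ } `ps axww`;
if ( @busy ) {
    print "BackupPC jobs still running; not unmounting:\n", @busy;
    exit 1;
}
exit 0;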

 Thanks
 Matthias
   

Chris


--
Let Crystal Reports handle the reporting - Free Crystal Reports 2008 30-Day 
trial. Simplify your report design, integration and deployment - and focus on 
what you do best, core application coding. Discover what's new with 
Crystal Reports now.  http://p.sf.net/sfu/bobj-july
___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:http://backuppc.wiki.sourceforge.net
Project: http://backuppc.sourceforge.net/


Re: [BackupPC-users] Starting BackupPC without using init.d scripts

2009-08-28 Thread Chris Robertson
Volker Thiel wrote:
 On 27.08.2009 at 23:15, Chris Robertson wrote:

   
 Volker Thiel wrote:
 
 Also, I'd like to know if there's a way to start BackupPC in daemon
 mode?
   
 /path/to/installation/bin/BackupPC -d
 

 Sometimes it is as simple as this. :) Where can I find information  
 about additional startup parameters (like the log file, etc.)?
   

-d appears to be the only command line argument accepted.

 Volker Thiel
   

Chris


--
Let Crystal Reports handle the reporting - Free Crystal Reports 2008 30-Day 
trial. Simplify your report design, integration and deployment - and focus on 
what you do best, core application coding. Discover what's new with 
Crystal Reports now.  http://p.sf.net/sfu/bobj-july
___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:http://backuppc.wiki.sourceforge.net
Project: http://backuppc.sourceforge.net/


Re: [BackupPC-users] backuppc for large scale env

2009-08-27 Thread Chris Robertson
Brent Clark wrote:
 Hiya

 I got quite a few servers around the world that I need to backup.

 The question I would like to ask is, how does backuppc scale to the other 
 backup solutions, in large environments. Also for those running backuppc for 
 large scale environment, would you be so kind as to share your
 experiences, tell tales of problems, obstacles, and how too you over came it.

 If anyone can assist or help, and share your story it would be greatly be 
 appreciated.

 Kind Regards
 Brent Clark.

Well, I think I have a pretty large setup, so here you go...

[r...@archive-1 ~]# cat /etc/redhat-release
CentOS release 5.3 (Final)

[r...@archive-1 ~]# uname -a
Linux archive-1.gcimbs.net 2.6.18-128.1.16.el5 #1 SMP Tue Jun 30 
06:07:26 EDT 2009 x86_64 x86_64 x86_64 GNU/Linux

[r...@archive-1 ~]# cat /proc/cpuinfo
processor   : 0
vendor_id   : GenuineIntel
cpu family  : 6
model   : 23
model name  : Intel(R) Xeon(R) CPU   X3320  @ 2.50GHz
[SNIP]

[r...@archive-1 ~]# cat /proc/meminfo
MemTotal:  8179984 kB
MemFree: 99864 kB
Buffers: 18972 kB
Cached:4413596 kB
SwapCached: 476208 kB
Active:2687372 kB
Inactive:  2385616 kB
[SNIP]

/data is an XFS file system (with an external journal) mounted 
(noatime,nodiratime,logdev=/dev/drbd1,logbufs=8,logbsize=262144) on top 
of (as of yet, unsynchronized) DRBD* on  a RAID 6 setup provided by an 
Adaptec 51645 with 16 Seagate ST31000340NS drives (13 data, 2 parity, 1 
hot spare).

[r...@archive-1 ~]# df -h /dev/drbd0
Filesystem            Size  Used Avail Use% Mounted on
/dev/drbd0            3.0T  507G  2.6T  17% /data

[r...@archive-1 ~]# df -i /dev/drbd0
Filesystem              Inodes    IUsed      IFree IUse% Mounted on
/dev/drbd0          1932728448 36671135 1896057313    2% /data

BackupPC Server Status

General Server Information

* The servers PID is 22319, on host archive-1.gcimbs.net, version
  3.1.0, started at 2009-08-11 14:43.
* This status was generated at 2009-08-27 12:12.
* The configuration was last loaded at 2009-08-18 10:26.
* PCs will be next queued at 2009-08-27 13:00.
* Other info:
  o 0 pending backup requests from last scheduled wakeup,
  o 0 pending user backup requests,
  o 0 pending command requests,
  o Pool is 382.63GB comprising 9959365 files and 4369
directories (as of 2009-08-27 11:58),
  o Pool hashing gives 2986 repeated files with longest chain 623,
  o Nightly cleanup removed 845394 files of size 45.10GB (around
2009-08-27 11:58),
  o Pool file system was recently at 17% (2009-08-27 12:05),
today's max is 17% (2009-08-27 09:00) and yesterday's max
was 17%.


BackupPC: Host Summary

* This status was generated at 2009-08-27 12:13.
* Pool file system was recently at 17% (2009-08-27 12:05), today's
  max is 17% (2009-08-27 09:00) and yesterday's max was 17%.

Hosts with good Backups

There are 125 hosts that have been backed up, for a total of:

* 499 full backups of total size 2512.20GB (prior to pooling and
  compression),
* 1849 incr backups of total size 466.60GB (prior to pooling and
  compression).


My largest host is almost 90GB, with 1.3 million files.  My average host 
size is around 5GB and 100,000 files.  My blackout period is from 06:00 
to 20:30.  I run 8 concurrent backups, and rarely see a backup running 
past 08:00.  BackupPC_nightly (the pool culling) starts at 09:00 and 
covers 1/8th of the pool at a time with 4 threads.  It is currently 
completing in around 3 hours.  I run a FullPeriod of 6.6 days and an 
incremental period of .6 days.  My FullKeepCnt is 4, and my IncrKeepCnt 
is 14.  IncrLevels is set to 1,2,3.
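
Expressed as config.pl settings, that schedule looks roughly like this
(a sketch of my numbers above, not a recommendation; the weekDays value
is an assumption):

$Conf{MaxBackups}      = 8;
$Conf{BlackoutPeriods} = [ { hourBegin => 6.0, hourEnd => 20.5,
                             weekDays  => [0..6] } ];
$Conf{FullPeriod}  = 6.6;
$Conf{IncrPeriod}  = 0.6;
$Conf{FullKeepCnt} = 4;
$Conf{IncrKeepCnt} = 14;
$Conf{IncrLevels}  = [1, 2, 3];
$Conf{MaxBackupPCNightlyJobs} = 4;   # 4 threads...
$Conf{BackupPCNightlyPeriod}  = 8;   # ...over 1/8th of the pool per night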

If you have any questions, or feel there is anything I was not clear 
about, feel free to ask.

Chris

* So far the DRBD abstraction layer seems to be having a minimal effect 
on performance.  One of these days I'll get the mirror server online and 
start synchronizing.  For what it's worth:

[r...@archive-1 ~]# rpm -q kmod-drbd82 drbd82
kmod-drbd82-8.2.6-2
drbd82-8.2.6-1.el5.centos


--
Let Crystal Reports handle the reporting - Free Crystal Reports 2008 30-Day 
trial. Simplify your report design, integration and deployment - and focus on 
what you do best, core application coding. Discover what's new with 
Crystal Reports now.  http://p.sf.net/sfu/bobj-july
___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:http://backuppc.wiki.sourceforge.net
Project: http://backuppc.sourceforge.net/


Re: [BackupPC-users] Starting BackupPC without using init.d scripts

2009-08-27 Thread Chris Robertson
Volker Thiel wrote:
 Hello everyone, I'm currently trying to install BackupPC on the QNAP  
 TS-509 Pro NAS system. So far I managed to get through the  
 installation and I can start the server by calling the executable as  
 user backuppc: /path/to/installation/bin/BackupPC

 This call results in the PID printed on the command line and I can  
 communicate with the server as described in the documentation  
 (Installing BackupPC, Step 7: Talking to BackupPC). I'd like to set up  
 the init.d script, but none of the scripts generated during the  
 installation worked. I guess the linux distribution used on the QNAP  
 NAS is too different from Debian, Suse, RedHat and the like. I *could*  
 start digging into System V, but maybe someone here has a working  
 solution already?
   

Perhaps...  http://forum.qnap.com/viewtopic.php?t=192#p679

 Also, I'd like to know if there's a way to start BackupPC in daemon  
 mode?

/path/to/installation/bin/BackupPC -d

  Maybe I missed that section in the docs, but judging from the  
 generated init.d scripts it seems as if it relies on some system calls  
 to start in daemon mode.

 Volker Thiel

Chris


--
Let Crystal Reports handle the reporting - Free Crystal Reports 2008 30-Day 
trial. Simplify your report design, integration and deployment - and focus on 
what you do best, core application coding. Discover what's new with 
Crystal Reports now.  http://p.sf.net/sfu/bobj-july
___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:http://backuppc.wiki.sourceforge.net
Project: http://backuppc.sourceforge.net/


Re: [BackupPC-users] No emails being sent for backup failures

2009-08-27 Thread Chris Robertson
txoof wrote:
 Update:

 BackupPC sent an email this morning for the system in question.  The mail was 
 for a system that has had 16.5 days since its last backup.  Is there a 
 setting that I'm missing?  Or is this controlled only by EMailNotifyMinDays?  
 From my understanding of the documents, the email should come after 3 days of 
 no backups.  Any ideas what's going on?

http://backuppc.sourceforge.net/faq/BackupPC.html#item__conf_emailnotifyoldbackupdays_

 $Conf{EMailNotifyOldBackupDays} = 7.0;

 How old the most recent backup has to be before notifying user. 
 When there have been no backups in this number of days the user is 
 sent an email.

See also 
http://backuppc.sourceforge.net/faq/BackupPC.html#item__conf_emailnotifymindays_

 $Conf{EMailNotifyMinDays} = 2.5;

 Minimum period between consecutive emails to a single user. This 
 tries to keep annoying email to users to a reasonable level. Email 
 checks are done nightly, so this number is effectively rounded up (ie: 
 2.5 means a user will never receive email more than once every 3 days).

To do what you are trying to do, change EMailNotifyOldBackupDays to 3 
and EMailNotifyMinDays to 0.5.
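
In config.pl (or the per-host config) that would be, roughly:

$Conf{EMailNotifyOldBackupDays} = 3;    # nag once a host is 3 days behind
$Conf{EMailNotifyMinDays}       = 0.5;  # allow a reminder nearly every night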

Chris


--
Let Crystal Reports handle the reporting - Free Crystal Reports 2008 30-Day 
trial. Simplify your report design, integration and deployment - and focus on 
what you do best, core application coding. Discover what's new with 
Crystal Reports now.  http://p.sf.net/sfu/bobj-july
___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:http://backuppc.wiki.sourceforge.net
Project: http://backuppc.sourceforge.net/


Re: [BackupPC-users] Missing items in web interface?

2009-08-27 Thread Chris Robertson
wirehead wrote:
 I've recently been installing backuppc (for the first time) on CentOS 5.3. 

 I had to set selinux to permissive in order to get it to run - for some 
 reason sealert doesn't exist on my system and I couldn't figure out how else 
 to view the selinux logs to debug it. However, that's a different issue - I'm 
 just mentioning it in case it's related somehow. 

 Currently, I have a functioning web interface, but there are only five links 
 on the left side of the page - Status, Host Summary, Documentation, Wiki, and 
 SourceForge. As well, there's a searchbox above those items - but that's it. 
 There's nothing else on the page. 

 I do not have any configured hosts yet, as I had intended to use the web 
 configuration interface described in the CentOS howtos - 
 http://wiki.centos.org/HowTos/BackupPC#head-1832e54eada657b0febe86d35977cbfac33a5fc8

 However, that howto, and others I've looked at, references links on the 
 backuppc web interface that simply don't seem to exist for me. 

 I know I'm probably doing something stupid, and I know the question has 
 probably been answered a million times before, but I don't seem to be able to 
 construct a search string using the forum tools that gets me less than 
 several thousand results. 

 Any ideas?

My guess would be that you haven't set up authentication in Apache, or 
the authentication used does not match an administrative user.

http://backuppc.sourceforge.net/faq/BackupPC.html#cgi_user_interface_configuration_settings

Chris


--
Let Crystal Reports handle the reporting - Free Crystal Reports 2008 30-Day 
trial. Simplify your report design, integration and deployment - and focus on 
what you do best, core application coding. Discover what's new with 
Crystal Reports now.  http://p.sf.net/sfu/bobj-july
___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:http://backuppc.wiki.sourceforge.net
Project: http://backuppc.sourceforge.net/


Re: [BackupPC-users] Remove hosts from status

2009-08-11 Thread Chris Robertson
Adam Goryachev wrote:
 I've removed a hosts from backuppc (hosts file, removed the config file,
 done a rm -rf /var/lib/backuppc/pc/hostname and the host no longer shows
 up in the web interface in the dropdown, host summary page, etc.

 However, I still get information on the host from this command:
 /usr/share/backuppc/bin/BackupPC_serverMesg status hosts
 I've slightly modified the output to make it more readable, but this is
 what it shows about the host I want to get rid of:
 somehost =>
   {lastGoodBackupTime => 1242909317,
   deadCnt => 2,
   reason => Reason_nothing_to_do,
   activeJob => 0,
   state => Status_idle,
   aliveCnt => 79,
   endTime => "",
   error => "host not found",
   needLink => 0,
   type => incr,
   startTime => 1249656051,
   userReq => undef},

 Could anyone suggest what else I need to do to stop the host showing up
 in the output here?
   

Have you tried stopping and re-starting the BackupPC server process?  
I'd imagine it has a client hash memory mapped.

 Thanks,
 Adam

Chris


--
Let Crystal Reports handle the reporting - Free Crystal Reports 2008 30-Day 
trial. Simplify your report design, integration and deployment - and focus on 
what you do best, core application coding. Discover what's new with 
Crystal Reports now.  http://p.sf.net/sfu/bobj-july
___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:http://backuppc.wiki.sourceforge.net
Project: http://backuppc.sourceforge.net/


Re: [BackupPC-users] BackupPC Strange Errors

2009-08-11 Thread Chris Robertson
Adam Goryachev wrote:
 I'm trying to backup a remote host which has recently had a lot of
 changes. Initially it kept getting read errors, but after manually (from
 the command line) re-running the full backup, it almost completed
 (continuing the partial each time). However, eventually the scheduled
 backup started in parallel with my manual backup, which screwed
 everything up resulting in the new directory being deleted and the
 backups folder still listing the backup as existing.

 Re-starting backups resulted in failures because backup #18 directory
 didn't exist, so it couldn't continue the backup which backuppc thought
 it should be able to do...

 Eventually I deleted the line from the backups file, and re-started a
 backup which managed to start fresh, and start downloading all the
 changes since the last backup #17. However, now it always gets to the
 same file and I get these errors:
  create d 700   544/513   0 Documents and
 Settings/Administrator/Local Settings/Temp/TGFDBs
 Unable to read 992000 bytes from
 /var/lib/backuppc/pc/office/new//fcdrive/RStmp got=52736,
 seekPosn=37696000 (9728,256,10001,37748736,38331424)
 Unable to read 992000 bytes from
 /var/lib/backuppc/pc/office/new//fcdrive/RStmp got=52736,
 seekPosn=37696000 (9728,256,10001,37748736,38331424)
 Unable to read 992000 bytes from
 /var/lib/backuppc/pc/office/new//fcdrive/RStmp got=52736,
 seekPosn=37696000 (9728,256,10001,37748736,38331424)
 Unable to read 992000 bytes from
 /var/lib/backuppc/pc/office/new//fcdrive/RStmp got=52736,
 seekPosn=37696000 (9728,256,10001,37748736,38331424)
 Unable to read 992000 bytes from
 /var/lib/backuppc/pc/office/new//fcdrive/RStmp got=52736,
 seekPosn=37696000 (9728,256,10001,37748736,38331424)
 Unable to read 992000 bytes from
 /var/lib/backuppc/pc/office/new//fcdrive/RStmp got=52736,
 seekPosn=37696000 (9728,256,10001,37748736,38331424)
 Unable to read 992000 bytes from
 /var/lib/backuppc/pc/office/new//fcdrive/RStmp got=52736,
 seekPosn=37696000 (9728,256,10001,37748736,38331424)
 Unable to read 992000 bytes from
 /var/lib/backuppc/pc/office/new//fcdrive/RStmp got=52736,
 seekPosn=37696000 (9728,256,10001,37748736,38331424)
 Unable to read 992000 bytes from
 /var/lib/backuppc/pc/office/new//fcdrive/RStmp got=52736,
 seekPosn=37696000 (9728,256,10001,37748736,38331424)
 Unable to read 992000 bytes from
 /var/lib/backuppc/pc/office/new//fcdrive/RStmp got=52736,
 seekPosn=37696000 (9728,256,10001,37748736,38331424)

 I thought the RStmp file was used by backuppc when the file being backed
 up was too large to compress in memory or something like that...
   

Right.  If I'm reading the source correctly, RStmp is used in the rsync 
transfer method if the file currently being backed up:

1) Exists in a previous backup (as a compressed pool file)
2) Has changed since that previous backup
and
3) is greater than 16MB in size.

In this case, the previously-backed-up-file that matches the 
currently-being-backed-up file is written to RStmp, so the matching 
sequence of blocks (in the old and new versions) can be read for the 
rsync transfer (it's not possible to perform seeks on the compressed file).

Again if I'm reading the source correctly, the error you are seeing 
indicates the attrib file for the file in question states that the 
file's uncompressed size is 38331424 bytes, but when uncompressed, the 
file is only 37748736 bytes long.  Here's a patch to make the error 
message describe the actual file from the previous backup (so you can 
better verify my findings):

--- lib/BackupPC/Xfer/RsyncFileIO.pm.chris      2008-12-15 12:36:20.0 -0900
+++ lib/BackupPC/Xfer/RsyncFileIO.pm    2009-08-11 11:53:58.0 -0800
@@ -1049,7 +1049,7 @@
         my $got = sysread($fio->{rxInFd}, $data, $len);
         if ( $got != $len ) {
             my $inFileSize = -s $fio->{rxInName};
-            $fio->log("Unable to read $len bytes from $fio->{rxInName}"
+            $fio->log("Unable to read $len bytes from $attr->{fullPath}"
                     . " got=$got, seekPosn=$seekPosn"
                     . " ($i,$thisCnt,$fio->{rxBlkCnt},$inFileSize"
                     . ",$attr->{size})");

 Any suggestions or thoughts would be appreciated.

 Regards,
 Adam

Chris


--
Let Crystal Reports handle the reporting - Free Crystal Reports 2008 30-Day 
trial. Simplify your report design, integration and deployment - and focus on 
what you do best, core application coding. Discover what's new with 
Crystal Reports now.  http://p.sf.net/sfu/bobj-july
___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:http://backuppc.wiki.sourceforge.net
Project: http://backuppc.sourceforge.net/


Re: [BackupPC-users] Bug in Backuppc causing unnecessary re-transfer of data with rsync

2009-07-23 Thread Chris Robertson
Adam Goryachev wrote:
 One of my remote server being backed up had a very large log file, which
 was growing rather quickly (4GB growing at 2k/sec). This caused the
 backup to timeout sometimes...

 Anyway, see an extract of the log file which 'causes' the problem:
 Executing DumpPreUserCmd: /etc/backuppc/scripts/host1tunnel-start.sh
 incr backup started back to 2009-07-19 00:24:59 (backup #336) for
 directory cdrive
 Connected to 127.0.0.1:1515, remote version 29
 Negotiated protocol version 28
 Connected to module cdrive
 Sending args: --server --sender --numeric-ids --perms --owner --group -D
 - --links --times --block-size=2048 --recursive -D --checksum-seed=32761
 - --one-file-system --bwlimit=26 . .
 Checksum caching enabled (checksumSeed = 32761)
 Sent exclude: PartImage
 Sent exclude: hcn/BACKUP
 Sent exclude: hcn/Backups
 Sent exclude: users/Michael
 Sent exclude: users/User1
 Xfer PIDs are now 12320
   create d 75518/544   0 .
   create d 75518/544   0 AVPDOS
   create d 75518/544   0 BCBackup
   create d 75518/544   0 BCBackup/BlueData
 etc, continues as per normal until:
   create d 75518/544   0 Program Files/eScan/Updater
   create d 75518/544   0 Program Files/eScan/VXP64
   create d 75518/544   0 Program Files/eScan/Vista
   create d 75518/544   0 Program Files/eScan/Vista/AVCBack
   create   64418/544   17771 Program
 Files/eScan/Vista/AVCBack/avp._et
   create   64418/544  122214 Program
 Files/eScan/Vista/AVCBack/avp._lb
   create   64418/544   17771 Program
 Files/eScan/Vista/AVCBack/avp_ext._et
   create   64418/544   17771 Program
 Files/eScan/Vista/AVCBack/avp_x._et
 Read EOF:
 Tried again: got 0 bytes
   

This indicates there was a problem with the connection.  While trying to 
synchronize Program Files/eScan/Vista/AVCBack/avp_x.et an End Of File 
was unexpectedly encountered.  The transfer was attempted again, but no 
data was received.  I usually see this when my SSH connection (between 
my BackupPC server and the client) is interrupted.  Since this was an 
incremental backup...

 finish: removing in-process file Program
 Files/eScan/Vista/AVCBack/base279c._vc
   delete   64418/544   54207 Program
 Files/eScan/Vista/AVCBack/base319c._vc
   delete   64418/544   50464 Program
 Files/eScan/Vista/AVCBack/base649c._vc
   delete   64418/544  165934 Program
 Files/eScan/Vista/AVCBack/fa001._vc
   delete   64418/544   53160 Program
 Files/eScan/Vista/AVCBack/base675c._vc
   delete   64418/544   49358 Program
 Files/eScan/Vista/AVCBack/unp037._vc
   delete   64418/544   41666 Program
 Files/eScan/Vista/AVCBack/base772c._vc
   delete   64418/544   50984 Program
 Files/eScan/Vista/AVCBack/base475c._vc
   delete   64418/544   52555 Program
 Files/eScan/Vista/AVCBack/base421c._vc
   delete   64418/544   51594 Program
 Files/eScan/Vista/AVCBack/base641c._vc

 - From here on, all the files in this directory are delete even though
 they were never removed from the system and then:

   delete d 75518/544   0 System Volume Information
   delete   64418/544 211 boot.ini
   delete   44418/544  297072 ntldr
   delete d 75518/544   0 RECYCLER
   delete   64418/544   24064 Walter.doc
   delete d 75518/544   0 hcn
   delete d 75518/544   0 rsyncd
   delete d 75518/544   0 myob12
   delete d 75518/544   0 TEMP
   delete d 75518/544   0 users
   delete d 75518/544   0 var
   delete d 75518/544   0 TempData
   delete   64418/544 208 bootini.ins
   delete d 75518/544   0 wmpub
   delete d 75518/544   0 WINDOWS
   delete d 75518/544   0 cygwin
   delete   64418/544 248 back.cmd
   delete d 75518/544   0 TempEI4
   delete   64418/544 522 RHDSetup.log
 Child is aborting
 Done: 74 files, 4334969856 bytes
   

...it is aborted, and thrown away.

 ie, all the remaining directories are also deleted.

 I'm using BackupPC 3.1.0 from Debian Lenny, connecting to a windows box
 running rsyncd via SSH running on a Debian Etch box on the same subnet
 as the windows box.

 Linux (Backuppc) --SSH-- Linux (middle) --rsyncd-- Windows 2003 Server

 Obviously the windows directory itself didn't vanish especially.

 So, the problem is the following day, the backup will use this
 incremental as it's basis (IncrLevels = [1,2,3,4,5,6,7,8,9]) and so all
 files will be re-downloaded and 'pooled' like this:
   

Why are the files being transfered if they are in the pool?  Are they 
pooled from another computer's backup?

   pool 64418/544  126206 Program
 Files/eScan/Vista/AVCBack/base279c._vc
   pool 64418/544   50441 Program
 Files/eScan/Vista/AVCBack/base280c._vc
   pool 64418/544 

Re: [BackupPC-users] Web UI character set

2009-07-23 Thread Chris Robertson
Michał Sawicz wrote:
 Hi there,

 I have a 3.1.0 installation with apache and mod_perl. Everything is fine
 except the messages displayed on the web page.

 Most of the language files are latin1 encoded, except for zh_CN and pl
 which are utf8, because they're incompatible with latin1.

 The problem is I'm getting strings in utf encoded in utf once again. The
 result is 4-byte instead of 2-byte characters (błx82;ęx99;dów
 instead of błędów). It seems that the utf-encoded polish messages are
 encoded once more, treating them as latin1. That does not happen with
 any other locale...
   

http://www.mail-archive.com/a...@perl.apache.org/msg02453.html

 Two less important questions:
 - the 'Speed' parameter in Host Summary - it's empty here, does rsync
 xfer support this?
   

Yes.  At least it does for me.

 - will subsequent full backups download all unchanged files, too?

No.

  if not, what's the difference between full and incremental backups?
   

http://backuppc.wiki.sourceforge.net/Full+vs.+Incremental+Backups

 Cheers and thanks for a great tool.
   

Chris



--
___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:http://backuppc.wiki.sourceforge.net
Project: http://backuppc.sourceforge.net/


Re: [BackupPC-users] How to save all directories expected .....

2009-07-13 Thread Chris Robertson
Adam Goryachev wrote:
 Chris Robertson wrote:
   
 This depends on how your backups are set up (smbclient being a special
 case, apparently), but in general...

 $Conf{BackupFilesExclude} = { '*' => [ '/log' ] };

 ...should prevent a directory named log from being backed up.
 
 Does this also stop a folder called /home/logo or a file called
 /usr/bin/logn ?
   

That depends entirely on the backup transport being used (my example 
apparently won't match directories with smbclient).

That's why I suggested reading the (linked to) documentation of the 
directive.

 ie, afaik, it is a prefix or regex of a pathname
   

It depends on the method being used for the backup.  From the 
documentation...

The exact behavior is determined by the underlying transport program, 
smbclient or tar. For smbclient the exlclude file list is passed into 
the X option. Simple shell wild-cards using ``*'' or ``?'' are allowed.

For tar, if the exclude file contains a ``/'' it is assumed to be 
anchored at the start of the string. Since all the tar paths start with 
``./'', BackupPC prepends a ``.'' if the exclude file starts with a ``/''.

... and then ...

 Users report that for smbclient you should specify a directory followed 
by ``/*'', eg: ``/proc/*'', instead of just ``/proc''.

... but no mention of rsync.  Let's see if RsyncClientCmd gives a 
definite answer ...

$Conf{RsyncClientCmd} = '$sshPath -q -x -l root $host $rsyncPath $argList+';

Full command to run rsync on the client machine. The following 
variables are substituted at run-time:

   $host   host name being backed up
   $hostIP host's IP address
   $shareName  share name to backup (ie: top-level directory 
path)
   $rsyncPath  same as $Conf{RsyncClientPath}
   $sshPathsame as $Conf{SshPath}
   $argListargument list, built from $Conf{RsyncArgs},
   $shareName, $Conf{BackupFilesExclude} and
   $Conf{BackupFilesOnly}

...Not really.  argList is only mentioned in relation to RsyncClientCmd 
and RsyncClientRestoreCmd.  So, in closing, to be clear, my example* 
should be used ONLY as a starting point for performing your own research 
and experimentation.  :o)

 Regards,
 Adam

Chris

* And to be sure, any example you find or are given from anyone or 
anywhere not materially invested in your success...


--
Enter the BlackBerry Developer Challenge  
This is your chance to win up to $100,000 in prizes! For a limited time, 
vendors submitting new applications to BlackBerry App World(TM) will have
the opportunity to enter the BlackBerry Developer Challenge. See full prize  
details at: http://p.sf.net/sfu/Challenge
___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:http://backuppc.wiki.sourceforge.net
Project: http://backuppc.sourceforge.net/


Re: [BackupPC-users] Automatically stopping running backups during business hours

2009-07-13 Thread Chris Robertson
Matthias Meyer wrote:
 Adam Goryachev wrote:

   
 -BEGIN PGP SIGNED MESSAGE-
 Hash: SHA1

 Langdon Stevenson wrote:
 
 I have a number of servers at remote sites that get backed up over ADSL
 connections.  Usually the backups run in an hour or two outside of
 business hours which is fine.

 However occasionally a user will add a large number of files to a
 server, causing the backup to take much longer.  This causes a serious
 slow-down of the ADSL connection for staff at the effected site during
 the day.

 What I would like is for BackupPC to automatically stop backups at
 7.00am or some other configurable time if they are still running.

 I have read the documentation and can find no way of doing this, only
 the ability to stop backups from starting during certain periods.

 Can anyone confirm that BackupPC is unable to do what I need?  If so,
 does anyone know of a hack or patch to add this functionality?
   
 As far as I know there is no solution to this within BackupPC, one
 thing you can do is use the bwlimit parameter to rsync, and setup QoS
 on the ADSL to try and keep things reasonable. However, trying to do
 VoIP can still be a challenge, in which case the only solution is to
 manually login and stop the backup...

 I suppose you could have a script which runs at 9am to firewall the
 relevant ports and hence stop the backup, then at 5pm it unblocks again.

 Regards,
 Adam
 

 As an alternative you can place the following script in /etc/crontab.
 #!/bin/bash
 declare -a hosts
 hosts[0]=server1# fill in your own host names
 hosts[1]=server2# whose backup should be canceld
 hosts[2]=server3
 declare -i hostcount=3  # configure your count of killable hosts

 ps ax | grep BackupPC_dump -i | grep -v grep > /tmp/runnings
 while read pid fl1 fl2 duration perl dump fl3 host rest
 do
 for (( a=0; a<hostcount; a++ ))
 do
 if [ $host == ${hosts[a]} ]; then kill -9 $pid > /dev/null 2>&1; fi
   

The kill signal (-9) should only be used as a last resort.

http://sial.org/howto/shell/kill-9/

(That link is down at the moment, so I'll link to the archive as well: 
http://web.archive.org/web/20080208221340/http://sial.org/howto/shell/kill-9/bonk.jpg)

 done
 done < /tmp/runnings
 exit 0

 br
 Matthias
   

Chris


--
Enter the BlackBerry Developer Challenge  
This is your chance to win up to $100,000 in prizes! For a limited time, 
vendors submitting new applications to BlackBerry App World(TM) will have
the opportunity to enter the BlackBerry Developer Challenge. See full prize  
details at: http://p.sf.net/sfu/Challenge
___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:http://backuppc.wiki.sourceforge.net
Project: http://backuppc.sourceforge.net/


Re: [BackupPC-users] How to save all directories expected .....

2009-07-10 Thread Chris Robertson
glubby wrote:
 Hi,

 I'm a new user of BackupPC. I'm using it to make a backup of some of my linux 
 servers. It is working pretty well but there is some large log file on my 
 servers I don't need to backup.

 I look over the forum but I didn't find anything.

Heh.  You might also want to read through the documentation...

http://backuppc.sourceforge.net/faq/BackupPC.html#item__conf_backupfilesexclude_

  So, I was wondering if there is someone who knew how to save all directories 
 exepected those whose name are log (for example) ?

This depends on how your backups are set up (smbclient being a special 
case, apparently), but in general...

$Conf{BackupFilesExclude} = { '*' => [ '/log' ] };

...should prevent a directory named log from being backed up.
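
A slightly fuller sketch, assuming a tar or rsync transfer and a share
actually named '/' (adjust the keys to your own share names):

$Conf{BackupFilesExclude} = {
    '*' => [ '/log' ],               # any share: skip a top-level log dir
    '/' => [ '/var/log', '/tmp' ],   # or be explicit per share
};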

  If it is possible, of course !!!

 Thank you for your help
 David
   

Chris


--
Enter the BlackBerry Developer Challenge  
This is your chance to win up to $100,000 in prizes! For a limited time, 
vendors submitting new applications to BlackBerry App World(TM) will have
the opportunity to enter the BlackBerry Developer Challenge. See full prize  
details at: http://p.sf.net/sfu/Challenge
___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:http://backuppc.wiki.sourceforge.net
Project: http://backuppc.sourceforge.net/


Re: [BackupPC-users] Backup schedule

2009-07-09 Thread Chris Robertson
Joseph Holland wrote:
 Hi, I have been using BackupPC now for the last year or so but now I 
 want to be able to keep more backups.  I have read the documentation 
 many times, but don't understand exactly which options I need to change 
 (and to what).  I want to keep the last 7 daily backups (1 full, 6 
 incremental), the last 4 weekly full backups, the last 12 monthly full 
 backups and 1 yearly full backup.
   

Look at FullKeepCnt and IncrKeepCnt.  If your FullPeriod, IncrPeriod, 
IncrLevels and IncrKeepCnt are at the default (6.97, 0.97, [1] and 6 
respectively), then setting FullKeepCnt to...

[5, 0, 13]

...will keep the 5 most recent fulls, and 13 every-four-week fulls, 
which is close to what you are asking for.
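
As a config.pl sketch (values taken from the suggestion above; treat it
as a starting point rather than a drop-in answer):

$Conf{FullPeriod}  = 6.97;          # weekly fulls (the default)
$Conf{IncrPeriod}  = 0.97;          # daily incrementals (the default)
$Conf{IncrKeepCnt} = 6;             # the 6 incrementals between fulls
$Conf{FullKeepCnt} = [5, 0, 13];    # 5 weekly fulls plus 13 four-weekly fulls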

 Can anyone help me understand which configuration options I need to 
 change in order to achieve this???
   

There is a documentation link in the BackupPC web interface which lays 
out the Configuration Parameters (the section you'd be interested in for 
this is What to backup and when to do it).

 Thanks,


 Joseph.
   

Chris


--
Enter the BlackBerry Developer Challenge  
This is your chance to win up to $100,000 in prizes! For a limited time, 
vendors submitting new applications to BlackBerry App World(TM) will have
the opportunity to enter the BlackBerry Developer Challenge. See full prize  
details at: http://p.sf.net/sfu/Challenge
___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:http://backuppc.wiki.sourceforge.net
Project: http://backuppc.sourceforge.net/


Re: [BackupPC-users] OS X failing to back up

2009-07-07 Thread Chris Robertson
Ted To wrote:
 Hi,

 I'm not sure what the problem is but after reinstalling OS X on my
 wife's laptop, a backup has never been successfully completed.  I did
 not change any of the configuration files and am using xtar as
 suggested at: http://wiki.nerdylorrin.net/wiki/Wiki.jsp?page=BackupPC

 .
 I put the error log of at: http://pastebin.com/m1009bc48

 .  I'm running
 an oldish version of backuppc (2.1.2 on a dapper drake box).  Any help
 will be appreciated.
   

Based on these bits from the error log...

Running: /usr/bin/ssh -q -x -n -l root snoopy /usr/bin/env LC_ALL=C 
/usr/bin/nice -n 20 /usr/bin/xtar -c -v -f - -C /Users --totals 
--exclude=./louise/Documents/Parallels .
Xfer PIDs are now 8958,8957
[SNIP]
Connection to snoopy closed by remote host.
[SNIP]
Backup aborted (lost network connection during backup)
Running: /usr/bin/ssh -q -x -n -l root snoopy /usr/bin/env LC_ALL=C 
/usr/bin/nice -n 20 /usr/bin/xtar -c -v -f - -C /Users --totals 
--exclude=./louise/Documents/Parallels .
Xfer PIDs are now 24340,24339
[SNIP]
Read from remote host snoopy: No route to host

...you seem to have a network connectivity/reliability problem.

 Thanks,
 Ted To

Chris


--
Enter the BlackBerry Developer Challenge  
This is your chance to win up to $100,000 in prizes! For a limited time, 
vendors submitting new applications to BlackBerry App World(TM) will have 
the opportunity to enter the BlackBerry Developer Challenge. See full prize 
details at: http://p.sf.net/sfu/blackberry
___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:http://backuppc.wiki.sourceforge.net
Project: http://backuppc.sourceforge.net/


Re: [BackupPC-users] Having Several Issues

2009-06-25 Thread Chris Robertson
Admiral Beotch wrote:
 It sounds like this might be helpful for me:

 You can execute the following command as root to relabel your
 computer system:
 touch /.autorelabel; reboot


As an aside, you can get the same effect, without the reboot with 
restorecon -R /.  Using restorecon -Rv / will give verbose output.


 I guess I'll give it a shot and see what happens... Does anyone want 
 to weigh in on whether I should try touch /.autorelabel; touch 
 /BackupData/.autorelabel; reboot since the file system in question is 
 mounted to /BackupData, not '/' ?

Don't bother.  As far as I recall, only the existence of /.autorelabel 
is tested (much like /forcefsck).

Chris


--
___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:http://backuppc.wiki.sourceforge.net
Project: http://backuppc.sourceforge.net/


Re: [BackupPC-users] Having Several Issues

2009-06-25 Thread Chris Robertson
Admiral Beotch wrote:
 I fixed my SELinux problem by changing the context of the mounted 
 partition that holds TOPDIR... I can't say for certain that I got the 
 context 100% accurate, but it seems to be a secure choice given how 
 the httpd process is trying to interact with that part of the file 
 system. The command that fixed everything was:

 chcon -R -t httpd_log_t /<backup drive mount point>/

httpd_sys_content_t might be a more secure choice, as SELinux might give 
Apache permissions to write httpd_log_t.  But I'm pretty rusty on the 
details.  Also, explicitly setting the context is fine for a temporary 
solution, but if restorecon is ever run, the changes you made might 
not stick.


 Now I am about to see all my host logs and browse their backups while 
 keeping selinux enabled.

About, or able?


 I hope this helps someone else experiencing the same problem.

Indeed.  In any case, http://docs.fedoraproject.org/selinux-apache-fc3/ 
is a good read for securing Apache with SELinux.


 Admiral Beotch

Chris


--
___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:http://backuppc.wiki.sourceforge.net
Project: http://backuppc.sourceforge.net/


Re: [BackupPC-users] How BackupPC determines which host should make the next backup

2009-06-25 Thread Chris Robertson
Les Mikesell wrote:
 Matthias Meyer wrote:
   
 Assumed I have 10 hosts and the maximum number of simultaneous backups to
 run is 5.
 Further assumed all 10 hosts have to make a backup today.
 4 of them have theire last backup made 2 days ago and 3 of them yesterday.
 2 have a partial backup and 1 have no backup.
 9 hosts are reachable (ping) the last times but 1 host not. Therefore the
 BlackoutPeriods for 9 hosts can be applied by backuppc and for 1 host not.
 Nevertheless the time now is not within this blackout period.

 Which 5 hosts will request theire backups?

 Will BackupPC try the hosts in alphabetical order?
 Or will BackupPC use informations about the oldest backup and try this hosts
 at first?
 

 The first cut will be which ones have reached their backup interval 
 (since the last backup) at each wakeup time.  That is, at each wakeup it 
 will consider only the set that is not within a blackout and has passed 
 the interval time since it's last run - and these will not start if the 
 ping test fails.  I'm not sure who wins if there are more choices at one 
 time than your concurrency limit allows - it might just be the order in 
 the host list.  I just start one early from the web interface if I want 
 to push it ahead.
   

 From the source (/usr/local/BackupPC/bin/BackupPC on my machine)...

#
# Compare function for host sort.  Hosts with errors go first,
# sorted with the oldest errors first.  The remaining hosts
# are sorted so that those with the oldest backups go first.
#
sub HostSortCompare
{
#
# Hosts with errors go before hosts without errors
#
return -1 if ( $Status{$a}{error} ne "" && $Status{$b}{error} eq "" );

#
# Hosts with no errors go after hosts with errors
#

return  1 if ( $Status{$a}{error} eq "" && $Status{$b}{error} ne "" );

#
# hosts with the older last good backups sort earlier
#
    my $r = $Status{$a}{lastGoodBackupTime} <=>
            $Status{$b}{lastGoodBackupTime};
return $r if ( $r );

#
# Finally, just sort based on host name
#
return $a cmp $b;
}

Chris



--
___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:http://backuppc.wiki.sourceforge.net
Project: http://backuppc.sourceforge.net/


Re: [BackupPC-users] Having Several Issues

2009-06-24 Thread Chris Robertson
Admiral Beotch wrote:
 I recently installed BackupPC (BackupPC-3.1.0-3.el5) on a CentOS 5.3 
 server from the epel repo.

 It appears that backups are occurring but I am unable to view host 
 logs or browse backups.

 I can see the data being collected into the TOPDIR/pc directories, but 
 I the statistics indicate there are no successful backups and I get 
 the following errors when I try and view logs or browse backups:

 Error: Backup number for host fw does not exist.
 and
 Can't open log file

Does the log file in question exist?


 I'm not sure if this is related, but under the TOPDIR/pc/host/ 
 directory, the only subdirectory there is listed as 'f%2f'. Under 
 that, each subdirectory are prefixed with 'f'


 [r...@localhost 1]# tree -L 2
 .
 |-- attrib
 |-- backupInfo
 `-- f%2f
 |-- attrib
 |-- fbackup
 |-- fbin
 |-- fboot
 |-- fdev
 |-- fetc
 |-- fhome


Are you SURE that's at $TOPDIR/pc/host/?  That directory structure 
should be in a numbered directory (corresponding to the backup number), 
such as $TOPDIR/pc/host/14/.  Judging by the prompt, you are in a 
numbered (1) sub-directory.  If that's the case, this directory listing 
is normal, and we are interested in the REAL contents of 
$TOPDIR/pc/host/ (including permissions).


 Under the host summary page, I have this information:

 * This status was generated at 6/24 16:02.
 * Pool file system was recently at 78% (6/24 16:01), today's
   max is 79% (6/24 11:31) and yesterday's max was 79%.

 Hosts with good Backups

 There are 0 hosts that have been backed up, for a total of:

 * 0 full backups of total size 0.00GB (prior to pooling and
   compression),
 * 0 incr backups of total size 0.00GB (prior to pooling and
   compression).

  
 Here's the output of the server logfile:

 2009-06-24 01:00:01 Running 2 BackupPC_nightly jobs from 0..15 (out of 
 0..15)
 2009-06-24 01:00:01 Running BackupPC_nightly -m 0 127 (pid=27004)
 2009-06-24 01:00:01 Running BackupPC_nightly 128 255 (pid=27005)

 2009-06-24 01:00:01 Next wakeup is 2009-06-24 02:00:00
 2009-06-24 01:22:24 BackupPC_nightly now running BackupPC_sendEmail
 2009-06-24 01:22:24 Finished  admin1  (BackupPC_nightly 128 255)
 2009-06-24 01:22:28 Finished  admin  (BackupPC_nightly -m 0 127)

 2009-06-24 01:22:28 Pool nightly clean removed 0 files of size 0.00GB
 2009-06-24 01:22:28 Pool is 0.00GB, 0 files (0 repeated, 0 max chain, 0 
 max links), 1 directories
 2009-06-24 01:22:28 Cpool nightly clean removed 0 files of size 0.00GB

 2009-06-24 01:22:28 Cpool is 33.91GB, 686322 files (32 repeated, 2 max 
 chain, 1511 max links), 4369 directories
 2009-06-24 02:00:00 Next wakeup is 2009-06-24 03:00:00
 2009-06-24 02:36:41 Finished full backup on host1

 2009-06-24 02:36:41 Running BackupPC_link host1 (pid=27275)
 2009-06-24 02:40:11 Finished host1 (BackupPC_link host1)
 2009-06-24 03:00:01 Next wakeup is 2009-06-24 04:00:00
 ... 
 2009-06-24 10:00:00 Next wakeup is 2009-06-24 11:00:00

 2009-06-24 10:00:01 Started incr backup on host2 (pid=785, share=/)
 2009-06-24 10:00:01 Started incr backup on fw (pid=786, share=/)
 2009-06-24 10:00:01 Started incr backup on host1 (pid=787, share=/)
 2009-06-24 11:00:00 Next wakeup is 2009-06-24 12:00:00

 2009-06-24 11:15:01 Finished incr backup on host1
 2009-06-24 11:15:01 Running BackupPC_link host1 (pid=978)
 2009-06-24 11:17:13 Finished host1 (BackupPC_link host1)
 2009-06-24 11:33:35 Finished incr backup on fw

 2009-06-24 11:33:36 Running BackupPC_link fw (pid=1059)
 2009-06-24 11:35:57 Finished fw (BackupPC_link fw)
 2009-06-24 11:41:21 Finished incr backup on host2
 2009-06-24 11:41:21 Running BackupPC_link host2 (pid=1073)

 2009-06-24 11:41:56 Finished host2 (BackupPC_link host2)
 2009-06-24 12:00:00 Next wakeup is 2009-06-24 13:00:00
 

 Any ideas what might be causing my issues?

I'd guess either permissions or SELinux.  What's the output of getenforce?


 Thanks in advance!

Chris


--
___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:http://backuppc.wiki.sourceforge.net
Project: http://backuppc.sourceforge.net/


Re: [BackupPC-users] problems with linux host - Unable to read 4 bytes

2009-06-23 Thread Chris Robertson
Nick Smith wrote:
 Does anyone know if i can change the password to the backuppc user in linux
 and not have any adverse effects with the backuppc system?
   

Yes, you can.  The account doesn't actually NEED a password.

Chris


--
___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:http://backuppc.wiki.sourceforge.net
Project: http://backuppc.sourceforge.net/


Re: [BackupPC-users] Scheduling archiving host

2009-06-12 Thread Chris Robertson
Boniforti Flavio wrote:
 Is this meaning that the tarball is gzipped? In that case, what are 
 the parameter that follow the binary path?
   
 /bin/gzip is the compression program
 .gz is the extension of the output filename (blah.gz)
 * I think means all shares...

 Hope that helps a little, there is more information on the 
 various options/parameters on the backuppc wiki from memory.
 

 So the above thoughts of mine were correct. What will the 000 be?

It's the splitSize, or how large the archive files would be allowed to 
get before a new one is created.  In the example given, the archive 
would not be split.
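
For reference, the related archive settings in config.pl look something
like this (a sketch; the destination path is an assumption, so check
your own config.pl for the actual values):

$Conf{ArchiveDest}  = '/var/lib/backuppc/archive';   # where the tarballs land
$Conf{ArchiveComp}  = 'gzip';   # matches the /bin/gzip + .gz pair above
$Conf{ArchiveSplit} = 0;        # 0 means the archive is never split
$Conf{ArchivePar}   = 0;        # no par2 recovery data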

Chris

--
Crystal Reports - New Free Runtime and 30 Day Trial
Check out the new simplified licensing option that enables unlimited
royalty-free distribution of the report engine for externally facing 
server and web deployment.
http://p.sf.net/sfu/businessobjects
___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:http://backuppc.wiki.sourceforge.net
Project: http://backuppc.sourceforge.net/


[BackupPC-users] Lesson in netiquette (was Re: Upgrading from etch to lenny, was backups not working - backupdisable picking up incorrectly)

2009-06-10 Thread Chris Robertson
Tino Schwarze wrote:
 On Wed, Jun 10, 2009 at 01:30:35PM -0400, Jim McNamara wrote:
   
 Have you specifically done a dist-upgrade from etch to lenny?
   

 [...90 lines snipped...]

   
 By the way, top posting (writing above the previous post) is frowned upon by
 most mailing lists. Most mail programs handle it well, but people trying to
 read the thread via archives or on older software have trouble when someone
 writes above the older text.
 

 Full-quoting is about the same league.
   

Heh, not to mention thread hijacking...

(http://www.mail-archive.com/backuppc-users@lists.sourceforge.net/msg14800.html)

Live and learn.

 SCNR, Tino.
   

Chris

--
Crystal Reports - New Free Runtime and 30 Day Trial
Check out the new simplified licensing option that enables unlimited
royalty-free distribution of the report engine for externally facing 
server and web deployment.
http://p.sf.net/sfu/businessobjects
___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:http://backuppc.wiki.sourceforge.net
Project: http://backuppc.sourceforge.net/


Re: [BackupPC-users] send emails to customers AND admin?

2009-06-10 Thread Chris Robertson
error403 wrote:
 Hi, I'm trying to find  a way to send an email to the personal email of the 
 people I'm doing their backups for.  I tried to search but the terms email 
 and message are so general it gives me almost all the posts on the forum!  :?

Something like http://linuxgazette.net/issue72/teo.html?

Chris

--
Crystal Reports - New Free Runtime and 30 Day Trial
Check out the new simplified licensing option that enables unlimited
royalty-free distribution of the report engine for externally facing 
server and web deployment.
http://p.sf.net/sfu/businessobjects
___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:http://backuppc.wiki.sourceforge.net
Project: http://backuppc.sourceforge.net/


Re: [BackupPC-users] backup from the internet?

2009-06-10 Thread Chris Robertson
Les Mikesell wrote:
 Chris Robertson wrote:
   
 error403 wrote:
 
   I'm thinking of installing/using some sftp server sofware on their 
 computer.
   
   
 Better would be an rsyncd service, as that would allow you to only 
 transfer changes.
 

 If they are unix/linux/mac boxes you can use rsync over ssh.  On windows 
 you can use ssh port forwarding to connect to rsync in daemon mode.
   

Indeed.

Given the mention of NetBios in the original message, I made the 
assumption that Windows clients were (exclusively) involved.  Thanks for 
clarifying the other available options.

Chris

--
Crystal Reports - New Free Runtime and 30 Day Trial
Check out the new simplified licensing option that enables unlimited
royalty-free distribution of the report engine for externally facing 
server and web deployment.
http://p.sf.net/sfu/businessobjects
___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:http://backuppc.wiki.sourceforge.net
Project: http://backuppc.sourceforge.net/


Re: [BackupPC-users] backups not working - backupdisable picking up incorrectly

2009-06-09 Thread Chris Robertson
Steve Redmond wrote:
 Hi,

 I've run in to a bit of a strange issue. We have a large number of
 backups running on backuppc and up until recently they have all been
 working fine. Now I see everything as Idle with aging last backups

 Now, when I attempt to kick backups off manually I get the usual:

   Reply from server was: ok: requested backup of hostname message

 No errors appear in the log other than:

   2009-06-09 15:54:17 User backuppc requested backup of hostname (hostname)



 Going through some debugging steps, I ran this:-

 backu...@backupbox:/usr/share/backuppc/bin$ ./BackupPC_dump -v -f hostname
 Exiting because backups are disabled with $Conf{BackupsDisable} = 2
 nothing to do


 This is however incorrect, as the config file clearly says:-

 $Conf{BackupsDisable} = '0';
   

I assume this means the main config file.  There are also per-host 
config files that will override the main one.  Other alternatives are 
you aren't looking at the same config file that BackupPC is, or 
$Conf{BackupsDisable} is set twice in the main config file.


 Any pointers on further steps for diagnosing this problem?

 Regards,
 - Steve

Chris

--
Crystal Reports - New Free Runtime and 30 Day Trial
Check out the new simplified licensing option that enables unlimited
royalty-free distribution of the report engine for externally facing 
server and web deployment.
http://p.sf.net/sfu/businessobjects
___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:http://backuppc.wiki.sourceforge.net
Project: http://backuppc.sourceforge.net/


Re: [BackupPC-users] backups not working - backupdisable picking up incorrectly

2009-06-09 Thread Chris Robertson
Steve Redmond wrote:
 Hiya,

 Thanks for your reply.

   
 I assume this means the main config file.  There are also per-host 
 config files that will override the main one.  
 

 Correct. I have checked that there were no overriding settings in other
 configuration files. Did a grep on the entire configuration directory to
 be sure and nothing turned up.

   
 Other alternatives are you aren't looking at the same config 
 file that BackupPC is, or $Conf{BackupsDisable} is set twice 
 in the main config file.
 

 It's not set twice, that much is certain

How certain?  ;o)

  - there's only one config file
 in place for it to use.
   

Fair enough, but is it possible that $Conf{BackupsDisable} is set twice 
in this one file?
When you performed your grep, you stated nothing turned up.  Does that 
mean that nothing UNEXPECTED showed up, or nothing at all?
What were the options you passed to grep to perform this search?
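
For instance, something along these lines (the config path is an
assumption; adjust for your install):

  grep -rn BackupsDisable /etc/backuppc/

would show the line number of every assignment, including a second one
further down config.pl that would silently win.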

To be clear, I don't intend to insult your intelligence, or your 
troubleshooting skills.  I'm just proposing possibilities to explain the 
data given.

 Thanks,
 - Steve

Chris

--
Crystal Reports - New Free Runtime and 30 Day Trial
Check out the new simplified licensing option that enables unlimited
royalty-free distribution of the report engine for externally facing 
server and web deployment.
http://p.sf.net/sfu/businessobjects
___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:http://backuppc.wiki.sourceforge.net
Project: http://backuppc.sourceforge.net/


Re: [BackupPC-users] How to use backuppc with TWO HDD

2009-06-02 Thread Chris Robertson
Skip Guenter wrote:
 On Tue, 2009-06-02 at 16:36 +1000, Adam Goryachev wrote:
   
 So, using 4 x 100G drives provides 133G usable storage... we can lose
 any two drives without any data loss. However, from my calculations
 (which might be wrong), RAID6 would be more efficient. On a 4 drive 100G
 system you get 200G available storage, and can lose any two drives
 without data loss.
 

 So isn't this the same as RAID10 w/ 4 drives, 200GB and can lose 2
 drives (as long as they aren't on the same mirror) and no risk of
 corrupted parity blocks?

With RAID 10, if you lose a drive AND its mirror, your array is 
toast.  While you CAN lose two drives from a RAID 10 array, they have 
to be specific drives.  With RAID 6 you have X data disks and 2 parity 
disks*.  You can lose ANY two disks from a RAID 6 array without data 
loss.  As you add more disks to a RAID 10 array, you have to dedicate 
half of them to mirroring.  With a RAID 6 array, you only need two 
parity disks.  Any more you add are usable to expand the capacity.
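
A quick worked example (plain arithmetic, nothing BackupPC-specific):
with eight 100GB drives, RAID 10 yields 8/2 x 100GB = 400GB usable and
only tolerates a second failure if it lands on a different mirror pair,
while RAID 6 yields (8 - 2) x 100GB = 600GB usable and tolerates any
two failures.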

Chris

*This statement is simplified, in that neither RAID 5 nor RAID 6 
actually dedicates a spindle (or two) to parity, but interleaves it with 
the data.  RAID 3 has a dedicated parity disk, but doesn't get much 
attention.  With RAID 6, if you lose a disk, performance will not 
suffer, as no data is missing.  RAID 5 down one disk and RAID 6 down two 
disks loses either a part of the data (which is calculated from the 
parity data) or the parity data.

--
OpenSolaris 2009.06 is a cutting edge operating system for enterprises 
looking to deploy the next generation of Solaris that includes the latest 
innovations from Sun and the OpenSource community. Download a copy and 
enjoy capabilities such as Networking, Storage and Virtualization. 
Go to: http://p.sf.net/sfu/opensolaris-get
___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:http://backuppc.wiki.sourceforge.net
Project: http://backuppc.sourceforge.net/


Re: [BackupPC-users] Pool is 0.00GB comprising 0 files and 0 directories....

2009-05-27 Thread Chris Robertson
Bernhard Ott wrote:
 Ralf Gross wrote:
   
 Hi,

 I use BackupPC since many years without hassle. But something seems to
 be broken now.

 BackupPC 3.1 (source)
 Debian Etch
 xfs fs

 

 Hi Ralf,
 look for the thread no cpool info shown on web interface (2008-04)in 
 the archives, Tino Schwarze found a solution for a xfs-related issue:

   
 I found a fix for lib/BackupPC/Lib.pm (the line with map {...} is
 the relevant one):

 --- lib/BackupPC/Lib.pm 2007-11-26 04:00:07.0 +0100
 +++ lib/BackupPC/Lib.pm   2008-04-13 12:52:03.938619979 +0200
 @@ -485,10 +485,15 @@
  
  from_to($path, "utf8", $need->{charsetLegacy})
  if ( $need->{charsetLegacy} ne "" );
 -return if ( !opendir(my $fh, $path) );
 +my ($fh);
 +if ( !opendir($fh, $path) ) {
 +   print "log ERROR: opendir ($path) failed\n";
 +   return;
 +}
 +
  if ( $IODirentOk ) {
  @entries = sort({ $a->{inode} <=> $b->{inode} } readdirent($fh));
 -map { $_->{type} = 0 + $_->{type} } @entries;   # make type numeric
 +map { $_->{type} = 0 + $_->{type}; $_->{type} = undef if
 ($_->{type} eq BPC_DT_UNKNOWN); }

For what it's worth, this line is the only change that is needed.  
Replacing "return if ( !opendir(my $fh, $path) );" above just makes your 
log file larger.

 @entries;   # make type numeric, unset unknown types
  } else {
  @entries = map { { name => $_} } readdir($fh);
  }
 @@ -553,9 +559,11 @@
  return if ( !chdir($dir) );
  my $entries = $bpc->dirRead(".", {inode => 1, type => 1});
  #print Dumper($entries);
 +#print("log got ", scalar(@$entries), " entries for $dir\n");
  foreach my $f ( @$entries ) {
  next if ( $f->{name} eq ".." || $f->{name} eq "." && $dontDoCwd );
  $param->{wanted}($f->{name}, "$dir/$f->{name}");
 +#if ( $f->{type} != BPC_DT_DIR ) { print("log skipping non-directory ", $f->{name}, " type: ", $f->{type}, "\n"); }
  next if ( $f->{type} != BPC_DT_DIR || $f->{name} eq "." );
  chdir($f->{name});
  $bpc->find($param, "$dir/$f->{name}", 1);


 HTH,
 Bernhard

Chris

--
Register Now for Creativity and Technology (CaT), June 3rd, NYC. CaT 
is a gathering of tech-side developers  brand creativity professionals. Meet
the minds behind Google Creative Lab, Visual Complexity, Processing,  
iPhoneDevCamp as they present alongside digital heavyweights like Barbarian 
Group, R/GA,  Big Spaceship. http://p.sf.net/sfu/creativitycat-com 
___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:http://backuppc.wiki.sourceforge.net
Project: http://backuppc.sourceforge.net/


Re: [BackupPC-users] Noob questions

2009-05-22 Thread Chris Robertson
Daniel Carrera wrote:
 Yes. I thought BackupPC was more like a cron job that runs once every 
 hour. My current script runs every 2 hours, so I always know when it's 
 not running. But if BackupPC runs all the time, then that's different.

 Are you aware of any backup tool that might be more suitable for what I 
 need? My script works well and I'm happy with it, but I wouldn't mind 
 getting some additional features like compression and exponential 
 backups which my script doesn't do.

 Daniel.

Check out rdiff-backup (http://rdiff-backup.nongnu.org/).  It sounds 
like a good fit.

Chris

--
Register Now for Creativity and Technology (CaT), June 3rd, NYC. CaT
is a gathering of tech-side developers  brand creativity professionals. Meet
the minds behind Google Creative Lab, Visual Complexity, Processing,  
iPhoneDevCamp asthey present alongside digital heavyweights like Barbarian
Group, R/GA,  Big Spaceship. http://www.creativitycat.com 
___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:http://backuppc.wiki.sourceforge.net
Project: http://backuppc.sourceforge.net/


Re: [BackupPC-users] Total amount of traffic data per session

2009-05-19 Thread Chris Robertson
Boniforti Flavio wrote:
 You will find it in $HOME/pc/host/backups See backuppc 
 online documentation for a description of this file
 

 Hy Matthias,

 the only thing I got in the docs is:

 The file /var/lib/backuppc/pc/$host/backups is read to decide whether a
 full or incremental backup needs to be run. If no backup is scheduled,
 or the ping to $host fails, then BackupPC_dump exits.
   

Further down (under the heading Storage layout) you'll find the 
description of the per-host backups file.  The columns that interest 
you are 3 (Start Time), 4 (End Time) and 6 (size).  Columns 3 and 4 are 
used to compute the Duration in the web interface, but column 6 does not 
appear to be utilized.
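
If you want a rough manual calculation, something like this should get
you close (the column positions follow the description above; the path,
the host name and the assumption that column 6 is in bytes are mine):

  awk '{ printf "backup #%s: %.0f min, %.0f MB\n", $1, ($4 - $3) / 60, $6 / 1048576 }' \
      /var/lib/backuppc/pc/host1/backups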

 What I want to get is the *real* traffic in MB which has been
 transferred for each host in each backup session, paired with the time
 it took. Actually I do as follows:

 - in the Host Summary I click on one of my hosts;
 - I look at the Backup Summary table and check the value in
 Duration/mins;
 - then I scroll down to File Size/Count Reuse Summary but don't
 actually know which value I should consider as the *real transferred
 data amount*.

 Who knows how to treat and interpret values correctly so I can say: OK,
 host1 took 32 minutes for it's last backup session, in which it
 transferred 326 Mbytes of data?
   

Your choices are:
* modify the source
* make a feature request
* perform a manual calculation

 Thanks again,
 Flavio Boniforti

 PIRAMIDE INFORMATICA SAGL
 Via Ballerini 21
 6600 Locarno
 Switzerland
 Phone: +41 91 751 68 81
 Fax: +41 91 751 69 14
 URL: http://www.piramide.ch
 E-mail: fla...@piramide.ch 

Chris

--
Crystal Reports - New Free Runtime and 30 Day Trial
Check out the new simplified licensing option that enables 
unlimited royalty-free distribution of the report engine 
for externally facing server and web deployment. 
http://p.sf.net/sfu/businessobjects
___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:http://backuppc.wiki.sourceforge.net
Project: http://backuppc.sourceforge.net/


Re: [BackupPC-users] Is backuppc with rsyncrypto instead rsync possible

2009-04-29 Thread Chris Robertson
Matthias Meyer wrote:
 Hi,

 I think about an encrypted backup and find rsyncrypto.
 Is there a BackupPC_dump support for rsyncrypto?
 Or any other way to use rsyncrypto with backuppc?
   

 From the looks of it, you would just run the rsyncrypto on the client 
as a pre-backup command, and then backup the resulting (encrypted) files 
using the rsync protocol.
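
A minimal sketch of that idea (the paths, key file names and the use of
$Conf{DumpPreUserCmd} are my assumptions, not something BackupPC ships):

  # On the client, maintain an encrypted mirror of the data to back up:
  #   rsyncrypto -r /home /backup-staging/crypt /backup-staging/keys master.crt
  # In the host's config, refresh that mirror before each dump and then
  # back up the encrypted copy instead of the plain one:
  $Conf{DumpPreUserCmd} = '$sshPath -q -x -l root $host rsyncrypto -r /home /backup-staging/crypt /backup-staging/keys master.crt';
  $Conf{RsyncShareName} = ['/backup-staging/crypt'];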

 Thanks
 Matthias
   


Chris

--
Register Now  Save for Velocity, the Web Performance  Operations 
Conference from O'Reilly Media. Velocity features a full day of 
expert-led, hands-on workshops and two days of sessions from industry 
leaders in dedicated Performance  Operations tracks. Use code vel09scf 
and Save an extra 15% before 5/3. http://p.sf.net/sfu/velocityconf
___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:http://backuppc.wiki.sourceforge.net
Project: http://backuppc.sourceforge.net/


Re: [BackupPC-users] Defining data retention periods

2009-04-29 Thread Chris Robertson
Holger Parplies wrote:
 Hi,

 Boniforti Flavio wrote on 2009-04-29 14:22:36 +0200 [Re: [BackupPC-users] 
 Defining data retention periods]:
   
 OK, now the main question of my post: data retention.
 [...]
 
 In my actual setup when the oldest FULL gets deleted (because it is now
 older than 4 weeks), do I loose also all the subsequent INCREMENTALS that
 were done?
 

 would you implement it that way? Then why do you assume it is implemented that
 way? Why don't you just, for once, simply wait and see? BackupPC will show you
 for free.

Even better, you could check the documentation...

$Conf{IncrLevels} = [1];

Level of each incremental. ``Level'' follows the terminology of 
dump(1). A full backup has level 0. A new incremental of level N will 
backup all files that have changed since the most recent backup of a 
lower level.

The entries of $Conf{IncrLevels} apply in order to each incremental 
after each full backup. It wraps around until the next full backup. For 
example, these two settings have the same effect:

  $Conf{IncrLevels} = [1, 2, 3];
  $Conf{IncrLevels} = [1, 2, 3, 1, 2, 3];

This means the 1st and 4th incrementals (level 1) go all the way 
back to the full. The 2nd and 3rd (and 5th and 6th) backups just go back 
to the immediate preceding incremental.

Specifying a sequence of multi-level incrementals will usually mean 
more than $Conf{IncrKeepCnt} incrementals will need to be kept, since 
lower level incrementals are needed to merge a complete view of a 
backup. For example, with

  $Conf{FullPeriod}  = 7;
  $Conf{IncrPeriod}  = 1;
  $Conf{IncrKeepCnt} = 6;
  $Conf{IncrLevels}  = [1, 2, 3, 4, 5, 6];

there will be up to 11 incrementals in this case:

  backup #0  (full, level 0, oldest)
  backup #1  (incr, level 1)
  backup #2  (incr, level 2)
  backup #3  (incr, level 3)
  backup #4  (incr, level 4)
  backup #5  (incr, level 5)
  backup #6  (incr, level 6)
  backup #7  (full, level 0)
  backup #8  (incr, level 1)
  backup #9  (incr, level 2)
  backup #10 (incr, level 3)
  backup #11 (incr, level 4)
  backup #12 (incr, level 5, newest)

Backup #1 (the oldest level 1 incremental) can't be deleted since 
backups 2..6 depend on it. Those 6 incrementals can't all be deleted 
since that would only leave 5 (#8..12). When the next incremental 
happens (level 6), the complete set of 6 older incrementals (#1..6) will 
be deleted, since that maintains the required number 
($Conf{IncrKeepCnt}) of incrementals. This situation is reduced if you 
set shorter chains of multi-level incrementals, eg:

  $Conf{IncrLevels}  = [1, 2, 3];

would only have up to 2 extra incrementals before all 3 are deleted.

BackupPC as usual merges the full and the sequence of incrementals 
together so each incremental can be browsed and restored as though it is 
a complete backup. If you specify a long chain of incrementals then more 
backups need to be merged when browsing, restoring, or getting the 
starting point for rsync backups. In the example above (levels 1..6), 
browsing backup #6 requires 7 different backups (#0..6) to be merged.

Because of this merging and the additional incrementals that need to 
be kept, it is recommended that some level 1 incrementals be included in 
$Conf{IncrLevels}.

Prior to version 3.0 incrementals were always level 1, meaning each 
incremental backed up all the files that changed since the last full.

Chris

--
Register Now  Save for Velocity, the Web Performance  Operations 
Conference from O'Reilly Media. Velocity features a full day of 
expert-led, hands-on workshops and two days of sessions from industry 
leaders in dedicated Performance  Operations tracks. Use code vel09scf 
and Save an extra 15% before 5/3. http://p.sf.net/sfu/velocityconf
___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:http://backuppc.wiki.sourceforge.net
Project: http://backuppc.sourceforge.net/


Re: [BackupPC-users] Is backuppc with rsyncrypto instead rsync possible

2009-04-29 Thread Chris Robertson
Matthias Meyer wrote:
 Chris Robertson wrote:

   
 Matthias Meyer wrote:
 
 Hi,

 I think about an encrypted backup and find rsyncrypto.
 Is there a BackupPC_dump support for rsyncrypto?
 Or any other way to use rsyncrypto with backuppc?
   
   
  From the looks of it, you would just run the rsyncrypto on the client
 as a pre-backup command, and then backup the resulting (encrypted) files
 using the rsync protocol.

 
 Yes that's possible. But I need additional disk space on the client.
 rsyncrypto send modified parts of a file to the destination disk as well as
 rsync do.
   

Rsyncrypto doesn't seem to send the files directly.  From the tutorial 
(http://www.linux.com/feature/125322) linked from the home page 
(http://rsyncrypto.lingnu.com/index.php/Home_Page):

"rsyncrypto is designed to be used as a presync option to rsync. That 
is, you first use rsyncrypto on the plain unencrypted files to obtain an 
encrypted directory tree, and you then run rsync to send that encrypted 
tree to a remote system."

The tag line on the Sourceforge page 
(http://sourceforge.net/projects/rsyncrypto) states "rsync friendly file 
encryption".  On that same page, the description states "A slightly 
reduced strength bulk encryption. In exchange for the reduced strength, 
you get the ability to rsync the encrypted files, so that local changes 
in the plaintext file will result in (relatively) local changes to the 
cyphertext file."

Perhaps you have a different source of information?

 So I am looking for a solution that backuppc can use this files.

 br
 Matthias
   

Chris

--
Register Now  Save for Velocity, the Web Performance  Operations 
Conference from O'Reilly Media. Velocity features a full day of 
expert-led, hands-on workshops and two days of sessions from industry 
leaders in dedicated Performance  Operations tracks. Use code vel09scf 
and Save an extra 15% before 5/3. http://p.sf.net/sfu/velocityconf
___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:http://backuppc.wiki.sourceforge.net
Project: http://backuppc.sourceforge.net/


Re: [BackupPC-users] BackupPC 3.2.0beta0 released

2009-04-09 Thread Chris Robertson
Craig Barratt wrote:
 BackupPC 3.2.0beta0 has been released on SF.net.
 3.2.0beta0 is the first beta release of 3.2.0.

 3.2.0beta0 has several new features and quite a few bug fixes
 since 3.1.0.  New features include:

I didn't see any mention of lib/BackupPC/Lib.pm being updated for the 
case when XFS is used as the pool file system  and IO::Dirent is 
installed (as per 
http://www.mail-archive.com/backuppc-de...@lists.sourceforge.net/msg00195.html).

Looking at the source of the Beta, this looks like the changes were not 
implemented.  Was it just missed, or is there another reason?

Chris

--
This SF.net email is sponsored by:
High Quality Requirements in a Collaborative Environment.
Download a free trial of Rational Requirements Composer Now!
http://p.sf.net/sfu/www-ibm-com
___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:http://backuppc.wiki.sourceforge.net
Project: http://backuppc.sourceforge.net/


Re: [BackupPC-users] How do I expire backups from disabled hosts?

2009-04-01 Thread Chris Robertson
Tino Schwarze wrote:
 Hi there,

 I've got several retired hosts and want to keep only the latest backup
 from them. I've set $Config{BackupsDisable}=2 in the server's config.pl
 and $Config{FullKeepCntMin}=1 but backups are still kept (I see these
 values if I edit the config via web interface).
   

You can edit values for hosts individually.  Use the drop down box, or 
click a link (from the Host Summary page, or the LOG file etc.) to get 
to the Host Backup Summary.  On the right hand side, there is an Edit 
Config link (two actually, one for the host and one for the general 
config).  Change the IncrKeepCnt and/or FullKeepCnt for the host under 
Schedule.  Make sure the box next to Override is checked.

Obviously you could also just create or edit the host specific config 
files directly, but using the web interface is probably less error prone.
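
For example, a per-host override file might look something like this
(the file name and the exact counts are just illustrations):

  # /etc/backuppc/retiredhost.pl
  $Conf{BackupsDisable} = 2;   # never attempt another backup of this host
  $Conf{FullKeepCnt}    = 1;   # keep only the most recent full
  $Conf{IncrKeepCnt}    = 0;   # let the incrementals expire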
 BackupPC still tries to ping these hosts (unsuccessfully).
   

Fixed in 3.2.  From 
http://www.mail-archive.com/backuppc-users@lists.sourceforge.net/msg13757.html:

* Changed bin/BackupPC_dump to not ping or lookup the host if
  $Conf{BackupsDisable} is set.  Requested by John Rouillard.


 Where to look next?

 Thanks,

 Tino.
   

Chris


--
___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:http://backuppc.wiki.sourceforge.net
Project: http://backuppc.sourceforge.net/


Re: [BackupPC-users] How scalable is backuppc?

2009-03-17 Thread Chris Robertson
Mike Dresser wrote:
 Chris Robertson wrote:
   
 How many hosts do you back up?
   
 
 About 30 are active, 12 are sporadic (laptops, etc).  Total that gets 
 written out to off site backup is about 300GB of data a day, compressed.
   
 What does df -i show for the mount point?
   
 
 /dev/sdb1   6.4G   19M   6.3G    1%
   

Yeah, I had really good performance when I was running around 20M 
inodes.  But the more I look, the less I think that's related to my problem:

100 TB of 1 MB files (~100 M files):
http://oss.sgi.com/archives/xfs/2007-10/msg00320.html
...billions of 1k files...
http://www.mail-archive.com/linux-r...@vger.kernel.org/msg05100.html

 Did you use an external log?
   
 
 No.
   
 Did you use any other specific optimizations when creating the file 
 system (version 2 log, modified suint and/or swidth)?
   
 
 No.  Default options for Debian etch.  This filesystem has been 
 xfs_growfs'd a few times after being dd'd to a new volume, since it's 
 impossible to back it up with xfsdump|xfsrestore.  I mount with noatime, 
 and logbufs=8

   
 How big is the filesystem?
   
 
 6.4TB, and 2.8TB is in use, with 2TB or so of that being backuppc's.
   
 [r...@archive-1 ~]# xfs_db -c frag -r /dev/sdb1
 actual 41673387, ideal 40843714, fragmentation factor 1.99%
 
 actual 10977124, ideal 10669547, fragmentation factor 2.80%
   

Thanks for the numbers.  I'm starting to think my problems might be 
related to the kernel I'm running (default Centos 5.2, with xfs-kmod).  
It's been years since I rolled my own kernel, but I might just have to 
break out the compiler...

 Mike

Chris

--
Apps built with the Adobe(R) Flex(R) framework and Flex Builder(TM) are
powering Web 2.0 with engaging, cross-platform capabilities. Quickly and
easily build your RIAs with Flex Builder, the Eclipse(TM)based development
software that enables intelligent coding and step-through debugging.
Download the free 60 day trial. http://p.sf.net/sfu/www-adobe-com
___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:http://backuppc.wiki.sourceforge.net
Project: http://backuppc.sourceforge.net/


Re: [BackupPC-users] How scalable is backuppc?

2009-03-17 Thread Chris Robertson
Les Mikesell wrote:
 Chris Robertson wrote:
   
 Thanks for the numbers.  I'm starting to think my problems might be 
 related to the kernel I'm running (default Centos 5.2, with xfs-kmod).  
 It's been years since I rolled my own kernel, but I might just have to 
 break out the compiler...
 

 Any chance of a more drastic change?  That load looks like it would be a 
 great test for something that can run zfs...
   

Why not?  :o)  As long as it can continue to access the XFS volume I 
have (for the month until my current backups would time out), I'm game.

Chris

--
Apps built with the Adobe(R) Flex(R) framework and Flex Builder(TM) are
powering Web 2.0 with engaging, cross-platform capabilities. Quickly and
easily build your RIAs with Flex Builder, the Eclipse(TM)based development
software that enables intelligent coding and step-through debugging.
Download the free 60 day trial. http://p.sf.net/sfu/www-adobe-com
___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:http://backuppc.wiki.sourceforge.net
Project: http://backuppc.sourceforge.net/


Re: [BackupPC-users] How scalable is backuppc?

2009-03-13 Thread Chris Robertson
Mike Dresser wrote:
 Chris Robertson wrote:
   
 Hopefully my original message didn't come across as negative of either 
 XFS or BackupPC.  Due to how well BackupPC and XFS handled the load I 
 threw at it initially, I expanded the retention policy of my backups 
 without thought, planning or proper research and testing.  As a result 
 of this rash behavior, I'm experiencing some teething problems.
   
 
 We keep backups for 36 monthlies, 4 weeklies, and 8 dailies.. haven't 
 seen much degradation in performance from that.
   

In the interest of differences in setup, and why you are not having 
troubles, I have a few questions to ask...

How many hosts do you back up?
What does df -i show for the mount point?
Did you use an external log?
Did you use any other specific optimizations when creating the file 
system (version 2 log, modified suint and/or swidth)?
How big is the filesystem?
How full is it?

 Do you run an xfs_fsr on the filesystem now and then to keep 
 fragmentation down?
   

Daily.
[r...@archive-1 ~]# xfs_db -c frag -r /dev/sdb1
actual 41673387, ideal 40843714, fragmentation factor 1.99%

 Mike

Chris

--
Apps built with the Adobe(R) Flex(R) framework and Flex Builder(TM) are
powering Web 2.0 with engaging, cross-platform capabilities. Quickly and
easily build your RIAs with Flex Builder, the Eclipse(TM)based development
software that enables intelligent coding and step-through debugging.
Download the free 60 day trial. http://p.sf.net/sfu/www-adobe-com
___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:http://backuppc.wiki.sourceforge.net
Project: http://backuppc.sourceforge.net/


Re: [BackupPC-users] How scalable is backuppc?

2009-03-12 Thread Chris Robertson
Mike Dresser wrote:
 Matthias Meyer wrote:
   
 Dear all,

 How scalable is backuppc?
 Where are the limits or what can produce performance bottlenecks?

 I've heard about hardlinks which can be a problem if theire are millions of
 it. Is that true?
   
 
 The file system can become... interesting to fix or backup when you get 
 a few million hard links, especially if you're using XFS.

Amen to that...

[r...@archive-1 trash]# df -i /dev/sdb1
 Filesystem            Inodes    IUsed      IFree IUse% Mounted on
 /dev/sdb1         6442438528 55813203 6386625325    1% /data
[r...@archive-1 trash]# mount |grep /dev/sdb1
/dev/sdb1 on /data type xfs 
(rw,noatime,nodiratime,nobarrier,logbufs=8,sunit=512,swidth=6656,logbsize=262144)
[r...@archive-1 trash]# cat /etc/redhat-release
CentOS release 5.2 (Final)
[r...@archive-1 trash]# uname -srvmp
Linux 2.6.18-92.1.18.el5 #1 SMP Wed Nov 12 09:19:49 EST 2008 x86_64 x86_64

   There _appears_ to be some bugs in Debian etch's xfs tools, last time I had 
 to 
 run an xfs_repair -n on etch it took 6 days.. after upgrading to lenny 
 it takes about 10 minutes or so.  I'm guessing there were some major 
 improvements in the xfs tools from etch to lenny with regards to memory 
 usage.
   
 Anyone would share his experience about CPU usage in relation to bandwidth
 usage? Particularly by using rsync?
   
 
 I see about 2-15m/s on our rsync backups   CPU usage is pretty low, it's 
 mostly disk i/o that holds it up.  The newer faster machines back up 
 faster, so there's a bottleneck on the clients as well.
   

Same story here.  CPU is, to a close approximation, never a problem.  
For an added twist, most of my clients are on the far end of a satellite 
link.  After the initial backups, even running 16 backups in parallel, I 
hardly peak above 4Mbit/sec (per iftop).

 I have a 2x1GHz Server with 2GB RAM and 4x500GB SATA Disks in Software
 Raid5.
   
 
 server here is a 2x1.8ghz opteron 265, 5GB ram, 8x1TB with 3ware raid5.  
 Backup window is set from 18:00 to 23:00, it generally finishes all the 
 backups within that 5 hours.  Full's every 8 days, incr every day.

 * Pool is 1852.64GB comprising 8466234 files and 4369 directories
   (as of 3/12 02:26),
 * Pool hashing gives 9654 repeated files with longest chain 85,
 * Nightly cleanup removed 7189 files of size 35.62GB (around 3/12
   02:26),
 * Pool file system was recently at 44% (3/12 12:05), today's max is
   44% (3/12 00:00) and yesterday's max was 44%.

 There are 42 hosts that have been backed up, for a total of:

 * 789 full backups of total size 10819.30GB (prior to pooling and
   compression),
 * 284 incr backups of total size 913.00GB (prior to pooling and
   compression).

For comparison, I have a Xeon X3320, 8GB RAM and 16 Seagate ES.2 1 TB 
drives (ST31000340NS) on a Adaptec 51645 using a RAID6 setup 
(effectively 13 data drives, 2 parity, 1 hot spare).  Fulls every 7 
days, incremental every day.  I Started off keeping 4 weeks of fulls and 
two weeks of incrementals, and cleaning the whole pool nightly.  All 
(~125) backups completed within the 20:00-07:00 time frame and 
BackupPC_nightly finished in under 5 hours (with a concurrence of 8).  
Then I doubled retention on both fulls and incrementals and have been 
fighting steadily worsening performance problems since.  Dropping the 
retention back to initial levels is taking a while (BackupPC_trashClean 
seems to finally be gaining some ground).

* Pool is 587.09GB comprising 31187909 files and 4369 directories
  (as of 2009-03-11 14:46),
* Pool hashing gives 2294 repeated files with longest chain 944,
* Nightly cleanup removed 1648979 files of size 50.98GB (around
  2009-03-11 14:46),
* Pool file system was recently at 11% (2009-03-12 11:00), today's
  max is 11% (2009-03-12 07:00) and yesterday's max was 11%.

There are 125 hosts that have been backed up, for a total of:

* 495 full backups of total size 2384.27GB (prior to pooling and
  compression),
* 855 incr backups of total size 380.11GB (prior to pooling and
  compression).

Chris


--
Apps built with the Adobe(R) Flex(R) framework and Flex Builder(TM) are
powering Web 2.0 with engaging, cross-platform capabilities. Quickly and
easily build your RIAs with Flex Builder, the Eclipse(TM)based development
software that enables intelligent coding and step-through debugging.
Download the free 60 day trial. http://p.sf.net/sfu/www-adobe-com
___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:http://backuppc.wiki.sourceforge.net
Project: http://backuppc.sourceforge.net/


Re: [BackupPC-users] How scalable is backuppc?

2009-03-12 Thread Chris Robertson
Les Mikesell wrote:
 Chris Robertson wrote:
   
 I've heard about hardlinks which can be a problem if theire are millions of
 it. Is that true?
 
 The file system can become... interesting to fix or backup when you get 
 a few million hard links, especially if you're using XFS.
   
 Amen to that...
 

 But note that normal operation of backuppc is fairly efficient, doing 
 name lookups within a reasonable tree structure even in the common pool. 
 Only the nightly cleanup has to walk the whole directory, and if that is 
 a problem you can divide it up so it only does a portion each night.

   
 For comparison, I have a Xeon X3320, 8GB RAM and 16 Seagate ES.2 1 TB 
 drives (ST31000340NS) on a Adaptec 51645 using a RAID6 setup 
 (effectively 13 data drives, 2 parity, 1 hot spare).  Fulls every 7 
 days, incremental every day.  I Started off keeping 4 weeks of fulls and 
 two weeks of incrementals, and cleaning the whole pool nightly.  All 
 (~125) backups completed within the 20:00-07:00 time frame and 
 BackupPC_nightly finished in under 5 hours (with a concurrence of 8).  
 Then I doubled retention on both fulls and incrementals and have been 
 fighting steadily worsening performance problems since.  Dropping the 
 retention back to initial levels is taking a while (BackupPC_trashClean 
 seems to finally be gaining some ground).
 

 Since the files are created in one place, then linked to another 
 directory, there isn't much the filesystem can do to optimize access, 
 and you end up seeking all over the place.
   

Hopefully my original message didn't come across as negative of either 
XFS or BackupPC.  Due to how well BackupPC and XFS handled the load I 
threw at it initially, I expanded the retention policy of my backups 
without thought, planning or proper research and testing.  As a result 
of this rash behavior, I'm experiencing some teething problems.
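
On the nightly-split point Les raises above, the relevant knobs (the
values here are only an example) are along these lines:

  $Conf{BackupPCNightlyPeriod}  = 4;   # walk 1/4 of the pool each night
  $Conf{MaxBackupPCNightlyJobs} = 2;   # run two BackupPC_nightly processes in parallel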

Chris


--
Apps built with the Adobe(R) Flex(R) framework and Flex Builder(TM) are
powering Web 2.0 with engaging, cross-platform capabilities. Quickly and
easily build your RIAs with Flex Builder, the Eclipse(TM)based development
software that enables intelligent coding and step-through debugging.
Download the free 60 day trial. http://p.sf.net/sfu/www-adobe-com
___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:http://backuppc.wiki.sourceforge.net
Project: http://backuppc.sourceforge.net/


Re: [BackupPC-users] Backing up a BackupPc host

2009-03-09 Thread Chris Robertson
Peter Walter wrote:
 All,

 I have implemented backuppc on a Linux server in my mixed OSX / Windows 
 / Linux environment for several months now, and I am very happy with the 
 results. For additional disaster recovery protection, I am considering 
 implementing an off-site backup of the backuppc server using rsync to 
 synchronize the backup pool to a remote server. However, I have heard 
 that in a previous release of backuppc, rsyncing to another server did 
 not work because backuppc kept changing the file and directory names in 
 the backup pool, leading the remote rsync server to having to 
 re-transfer the entire backup pool (because it thinks the renamed files 
 are new files).

 I have searched the wiki and the mailing list and can't find any 
 discussion of this topic.

http://www.mail-archive.com/backuppc-users@lists.sourceforge.net/msg13367.html

  Can anyone confirm that the way backuppc 
 manages the files and directories in the backup pool would make it 
 difficult to rsync to another server, and, if so, can anyone suggest a 
 method for mirroring the backuppc server at an offsite backup machine?

 Regards,
 Peter

Chris

--
___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:http://backuppc.wiki.sourceforge.net
Project: http://backuppc.sourceforge.net/


Re: [BackupPC-users] Kernel Panic, BackupPC + ext3

2009-03-04 Thread Chris Robertson
Nate wrote:
 We seem to be routinely having this issue where the server backuppc 
 is running on throws a kernel panic and thus hard locks the 
 machine.  It's completely random, sometimes happens daily, sometimes 
 we can have a lucky 2-3 weeks without a lockup.  I've taken a 
 screenshot and posted it here:

 http://locu.net/misc/kernelp_backuppc.jpg

 This hardware has been in use for years without as much as a burp 
 before using backuppc, so I'm suspecting this could be an ext3 issue 
 with the multitudes of files and ext3's inability to handle 
 them?  Prior to using backup pc, we backed up the same data just in 
 flat .tgz files.

 System info:
 kernel: 2.6.18-92.1.22.el5 #1 SMP Tue Dec 16 11:57:43 EST 2008 x86_64 
 x86_64 x86_64 GNU/Linux
 distro: centos 5.2
 hw:  athlon 64 3200+, 4GB ram

 Any thoughts?
   

What kind of drive controller are you using?  What kind of drives?

Further searching finds a similar issue:  
http://bugs.centos.org/view.php?id=2321

 Thanks,
 Nathan

Chris

--
Open Source Business Conference (OSBC), March 24-25, 2009, San Francisco, CA
-OSBC tackles the biggest issue in open source: Open Sourcing the Enterprise
-Strategies to boost innovation and cut costs with open source participation
-Receive a $600 discount off the registration fee with the source code: SFAD
http://p.sf.net/sfu/XcvMzF8H
___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:http://backuppc.wiki.sourceforge.net
Project: http://backuppc.sourceforge.net/


Re: [BackupPC-users] Kernel Panic, BackupPC + ext3

2009-03-04 Thread Chris Robertson
Nate wrote:
 Yeah, I doubt very much it's a backuppc issue, sorry if I may have 
 implied that.  I'm fairly confident it's an ext3/driver issue.  But as 
 this popped up when we began using backuppc, I suspect it may have to 
 do with the massive quantities of files and had hoped another 
 backuppc user may have encountered and solved it.

 controller is onboard nvidia, using the sata_nv driver
 drives are all Seagate ST31500341AS 1.5TB drives
   

Hmmm...  http://techreport.com/discussions.x/15863

 drives are tethered by LVM to a 4.1TB single ext3 partition

 Seems that centos reported bug also is using LVM, perhaps a tie, but 
 it's ext3 that's crashing by the crash output.
   

It's likely the kernel trying to access the ext3 data and not getting a 
response.  What does "fgrep frozen /var/log/messages" show?

You might try disabling the write cache on the physical drives (hdparm 
-W0 /dev/sd{a,b,c}), as some have reported that solves the stuttering 
at the cost of performance.

Chris

--
Open Source Business Conference (OSBC), March 24-25, 2009, San Francisco, CA
-OSBC tackles the biggest issue in open source: Open Sourcing the Enterprise
-Strategies to boost innovation and cut costs with open source participation
-Receive a $600 discount off the registration fee with the source code: SFAD
http://p.sf.net/sfu/XcvMzF8H
___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:http://backuppc.wiki.sourceforge.net
Project: http://backuppc.sourceforge.net/


Re: [BackupPC-users] move a specific backup (share) from one pool to another pool

2009-02-24 Thread Chris Robertson
Tino Schwarze wrote:
 On Mon, Feb 23, 2009 at 04:58:32PM -0600, Les Mikesell wrote:
   
 Craig Barratt wrote:
 
 I don't think there is a way to transfer a single host's backups using
 BackupPC_tarPCCopy.
   
 What happens if you just copy a single host's backup tree without regard 
 to the cpool links (assuming you have space)?  Will subsequent runs put 
 links into cpool so the wasted space goes away when the copied runs are 
 expired or will future matching files maintain links to the copied ones?
 

Subsequent runs would put links of newly BackupPC_dump'ed files into the 
pool.  The copied files would be duplicates of the pool files, but would 
not be linked.  As the copied backups are expired wasted space will be 
recovered.


 You might have to run BackupPC_link afterwards - it should integrate the
 files into the pool... but I'm not sure.
   

In order for this to have any effect, you'd have to recreate the 
$datadir/$pc/NewFileList.$backupNumber file which tells BackupPC_link 
which files need to be linked.  Basically, in the case above, you'd need 
to traverse each copied $datadir/$pc/$backupNumber/ directory and for 
every file found write out...

$poolHashFileName $fileSize $fileName

...to $datadir/$pc/NewFileList.$backupNumber.  So if you copied in 10 
old backups, you'd have to traverse 10 directories and write out 10 
NewFileList files.

Unless you are short on disk space or are planning on keeping the 
relocated backups around forever, that is likely more trouble than it's 
worth.

 Tino.
   

Chris


--
Open Source Business Conference (OSBC), March 24-25, 2009, San Francisco, CA
-OSBC tackles the biggest issue in open source: Open Sourcing the Enterprise
-Strategies to boost innovation and cut costs with open source participation
-Receive a $600 discount off the registration fee with the source code: SFAD
http://p.sf.net/sfu/XcvMzF8H
___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:http://backuppc.wiki.sourceforge.net
Project: http://backuppc.sourceforge.net/


Re: [BackupPC-users] When will backuppc begin pooling?

2009-02-12 Thread Chris Robertson
Brian Woodworth wrote:
 Yes, I have complete backups.  My problem appears to be a known bug as 
 stated earlier in the thread.  Craig was kind enough to post a 
 solution, but the problem is I don't know how to go about following 
 his instructions.

 thanks for the response


First, try...

perldoc IO::Dirent

...to see if you have the Dirent perl module installed.  If you get 
something like...

No documentation found for IO::Dirent.

...your problem probably lies elsewhere. If you get a documentation 
page, hit q to exit and run...

locate Lib.pm

...to find all files with Lib.pm in their name on your system 
(hopefully your locatedb is up to date.  If not, updatedb will probably 
update it).  You should find (at least) two...

/some/path/lib/BackupPC/Lib.pm
/some/path/lib/BackupPC/CGI/Lib.pm

...where /some/path is BackupPC's install path.  If you have more than 
two, don't continue.  Ask for further advice.  If you just have two, we 
want to modify the one NOT in the CGI directory*.  The following command 
(when run as root**, or the some other user authorized to modify the 
file) will update /some/path/lib/BackupPC/Lib.pm per Craig's 
instructions.  You should probably back up the file before running it...

sed -i -e 's/\$IODirentOk = 1;/$IODirentOk = 0;/' 
/some/path/lib/BackupPC/Lib.pm

...and you should probably consult with someone else to make sure I know 
what I'm talking about and am not about to break your system 
(inadvertently or otherwise). :o)

Chris

* The command I used won't do anything to the Lib.pm in the CGI 
directory, but sed is a very powerful tool and, as such, should be 
treated with the utmost respect.
** Running commands you get from the Internet as root is inadvisable.  
sudo is not really much better.

--
Open Source Business Conference (OSBC), March 24-25, 2009, San Francisco, CA
-OSBC tackles the biggest issue in open source: Open Sourcing the Enterprise
-Strategies to boost innovation and cut costs with open source participation
-Receive a $600 discount off the registration fee with the source code: SFAD
http://p.sf.net/sfu/XcvMzF8H
___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:http://backuppc.wiki.sourceforge.net
Project: http://backuppc.sourceforge.net/


Re: [BackupPC-users] Setting UP the CGI Interface

2009-02-10 Thread Chris Robertson
Odhiambo Washington wrote:
 On Fri, Feb 6, 2009 at 11:43 PM, Chris Robertson crobert...@gci.net wrote:

 Odhiambo Washington wrote:

  Surprisingly, I am not able to get the CGI interface to show up
 things
  as beautifully as I see in the screenshots on the website and I
 wonder
  what I am missing.
 
  My BackupPC_Admin is in /usr/local/www/apache22/cgi-bin/ and it is
  owned by backuppc:www.
  The images from BackupPC are in
 /usr/local/www/apache22/icons/BackupPC
 
  In config.pl, I have the following:
 

  $Conf{ServerHost} = 'gw.crownkenya.com';
  $Conf{CgiDir} = '/usr/local/www/apache22/cgi-bin';
  $Conf{CgiImageDir}=
 '/usr/local/www/apache22/icons/BackupPC';
  $Conf{CgiImageDirURL}  = '/BackupPC';
 
  Going to http://$Conf{ServerHost}/BackupPC/ displays all the images.
 
  I could have the images inside /usr/local/www/apache22/icons/
 but even
  that did not work!
 
  Is there someone who encountered such a problem and managed to
 solve it??

 When I surf to http://gw.crownkenya.com/cgi-bin/BackupPC_Admin, things
 look fine...  Are you trying a different URL?


 I am shocked. It looks good from another machine. Let me add some 
 authentication stuff and see if I can scratch it.
 Is this the same way you did it? Did you just add some .htaccess 
 inside your cgi-bin dir?

I actually went the ModPerl route.  But the basics are the same.  An 
.htaccess file in the directory should work, as should putting the 
authentication requirements in the main (or an included) Apache config file.


  
 -- 
 Best regards,
 Odhiambo WASHINGTON,
 Nairobi,KE
 +254733744121/+254722743223
 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
 The only time a woman really succeeds in changing a man is when he is 
 a baby.
  - Natalie Wood

Chris

--
Create and Deploy Rich Internet Apps outside the browser with Adobe(R)AIR(TM)
software. With Adobe AIR, Ajax developers can use existing skills and code to
build responsive, highly engaging applications that combine the power of local
resources and data with the reach of the web. Download the Adobe AIR SDK and
Ajax docs to start building applications today-http://p.sf.net/sfu/adobe-com
___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:http://backuppc.wiki.sourceforge.net
Project: http://backuppc.sourceforge.net/


Re: [BackupPC-users] Setting UP the CGI Interface

2009-02-06 Thread Chris Robertson
Odhiambo Washington wrote:
 Hello list,

 I am new but I am kind of old hand.

Haven't seen you on the Squid-Users list in a while...  :o)

 Surprisingly, I am not able to get the CGI interface to show up things 
 as beautifully as I see in the screenshots on the website and I wonder 
 what I am missing.

 My BackupPC_Admin is in /usr/local/www/apache22/cgi-bin/ and it is 
 owned by backuppc:www.
 The images from BackupPC are in /usr/local/www/apache22/icons/BackupPC

 In config.pl, I have the following:

 $Conf{ServerHost} = 'gw.crownkenya.com';
 $Conf{CgiDir} = '/usr/local/www/apache22/cgi-bin';
 $Conf{CgiImageDir}= '/usr/local/www/apache22/icons/BackupPC';
 $Conf{CgiImageDirURL}  = '/BackupPC';

 Going to http://$Conf{ServerHost}/BackupPC/ displays all the images.

 I could have the images inside /usr/local/www/apache22/icons/ but even 
 that did not work!

 Is there someone who encountered such a problem and managed to solve it??

When I surf to http://gw.crownkenya.com/cgi-bin/BackupPC_Admin, things 
look fine...  Are you trying a different URL?



 -
 Best regards,
 Odhiambo WASHINGTON,
 Nairobi,KE
 +254733744121/+254722743223
 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
 The only time a woman really succeeds in changing a man is when he is 
 a baby.
  - Natalie Wood

Chris

--
Create and Deploy Rich Internet Apps outside the browser with Adobe(R)AIR(TM)
software. With Adobe AIR, Ajax developers can use existing skills and code to
build responsive, highly engaging applications that combine the power of local
resources and data with the reach of the web. Download the Adobe AIR SDK and
Ajax docs to start building applications today-http://p.sf.net/sfu/adobe-com
___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:http://backuppc.wiki.sourceforge.net
Project: http://backuppc.sourceforge.net/


Re: [BackupPC-users] Server reboots nightly

2009-02-06 Thread Chris Robertson
Chris Baker wrote:
 Which logs should I check? 

/var/log/messages

 And what should I look for in these logs?

Run the command "dmesg" (first, "man dmesg" so you know what this command 
does) and take a look at the output.  This is the boot-up message and 
should be replicated in /var/log/messages.  Take a look at the entries 
above it for clues as to why your server is restarting.  The last log entry 
before the dmesg output should be something along the lines of "Kernel 
log daemon terminating." and should be preceded by evidence of exiting 
processes.  If not, you probably have power or hardware problems.  If 
you do see evidence of a graceful restart, at least you know about when 
it's happening.  Start looking through /etc/cron.* and /etc/crontab for 
jobs that run around that time.
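
A couple of quick, generic checks (nothing BackupPC-specific; the log
paths may differ on your distribution):

  last -x reboot shutdown | head    # were the reboots clean shutdowns or hard resets?
  grep -i -B 5 "Kernel log daemon terminating" /var/log/messages*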

  I do
 know where the logs are on the system.
   

Chris

--
Create and Deploy Rich Internet Apps outside the browser with Adobe(R)AIR(TM)
software. With Adobe AIR, Ajax developers can use existing skills and code to
build responsive, highly engaging applications that combine the power of local
resources and data with the reach of the web. Download the Adobe AIR SDK and
Ajax docs to start building applications today-http://p.sf.net/sfu/adobe-com
___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:http://backuppc.wiki.sourceforge.net
Project: http://backuppc.sourceforge.net/


Re: [BackupPC-users] host config file command line editor ?

2009-01-08 Thread Chris Robertson
Alex wrote:
 Hi there, 
 i'm not aware using Perl scripting, but ok with bash / php.
 Then, i'd like to know if there's a way to set values for per pc config file 
 using command line tool ?

 By exemple, like to change only FullKeepCnt value, but, has it is by default 
 written that way :
 $Conf#123;FullKeepCnt#125; = #91;
 nbsp; '400'
 #93;;
   

That's HTML for...

$Conf{FullKeepCnt} = [
 '400'
];

It should not be HTML encoded on disk.

$Conf is a hash.  The hash entry FullKeepCnt is a list (I think) with, 
in this case, one value: 400.

 It's a bit hard for me to parse this file and write back a new value.

PHP has the htmlspecialchars_decode function 
(http://us.php.net/manual/en/function.htmlspecialchars-decode.php), 
which will take the nasty-looking string you started with and transform 
it to the proper on-disk form, which should also be easier to 
manipulate (I think PHP might even properly utilize Perl hashes).

  

 Didn't find any way of doing this on the wiki. I'd like to be able to do that 
 kind of thing without de cgi web interface, in order for me to be able to 
 change values through scripts.

 Thank you in advance for any advices :)

Chris

--
Check out the new SourceForge.net Marketplace.
It is the best place to buy or sell services for
just about anything Open Source.
http://p.sf.net/sfu/Xq1LFB
___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:http://backuppc.wiki.sourceforge.net
Project: http://backuppc.sourceforge.net/


Re: [BackupPC-users] problem with linux client

2008-12-23 Thread Chris Robertson
Anand Gupta wrote:
  Hi Les,

  Thanks for the link. I see BackupPC_tarCreate and BackupPC_zipCreate
  for tar and zip. Is there an rsync version ? So instead of creating a
  tar or zip, it can rsync the data over to another location ?

  The reason i asked is because the amount of data i am going to backup
  is 1T+ and thus creating a tar of the same would be unnecessary time
  consuming job, whereas infact i would only want to rsync the data
  across to another location.

  My main motive is incase i don't have access to a browser, i can
  still restore files using console. My backup server runs behind a
  firewall, i only have console access to the server.


You are probably best off tunneling HTTP through your SSH connection 
(assuming you don't really mean console access over a serial line), or 
installing the Lynx browser on the server, and using that.  If neither 
of those options works for some reason, you can use wget from the 
server's command line.

The wrapping on this might get ugly (make sure the command is all on one 
line and that the post-data has NO spaces in it)...

/usr/bin/wget --http-user=${MYBACKUPPCLOGIN} 
--http-password=${MYBACKUPPCPASSWORD} 
--post-data='host=${HOSTTORESTORE}&hostDest=${HOSTTORESTORE}&shareDest=${SHARE}&pathHdr=${PATH}&num=${BACKUPNUMBER}&type=4&action=Restore&fcb0=${FILE1}&fcb1=${FILE2}&fcbMax=144&share=${SHARE}' 
http://127.0.0.1/cgi-bin/BackupPC_Admin

Replace anything that looks like ${VARIABLE} with arguments that are 
reasonable for your install.
${SHARE} is the rsync share
${PATH} is the path within the rsync share
${BACKUPNUMBER} is the backup number to use (-1 for the latest, -2 for 
the one before that or you can explicitly specify it)
${FILE1} is the filename within ${SHARE} (I can't remember if it 
requires the full path or just the filename) to restore.  Increment the 
number after fcb for each file.  I think specifying a folder will 
recursively restore the folder and all files under it.
Hopefully the rest are self explanatory.

Basically this tells wget to POST the data in specified by the 
--post-data argument to the BackupPC_Admin script interpreted by the web 
server on localhost, and to use the HTTP credentials specified by 
http-user and http-password.  Just what your browser would do.
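Purely for illustration, with made-up (and untested) values -- host mars, 
rsync share /home, restoring /home/alice/notes.txt from the latest backup -- 
the whole thing might look something like:

/usr/bin/wget --http-user=backuppc --http-password=secret --post-data='host=mars&hostDest=mars&shareDest=/home&pathHdr=/alice&num=-1&type=4&action=Restore&fcb0=/alice/notes.txt&fcbMax=144&share=/home' http://127.0.0.1/cgi-bin/BackupPC_Admin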

Yeah.  Using Lynx is a much better option.


  -- Thanks and Regards,

  Anand

Chris

--
___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:http://backuppc.wiki.sourceforge.net
Project: http://backuppc.sourceforge.net/


Re: [BackupPC-users] backuppc 3.0.0: another xfs problem?

2008-12-23 Thread Chris Robertson
thomat...@gmail.com wrote:
 How dangerous is it to run xfs without write barriers?

http://oss.sgi.com/projects/xfs/faq.html#nulls

As long as your computer shuts down properly, sends a flush to the 
drives, and the drives manage to clear their on-board cache before power 
is removed or the chip set is reset, it's not dangerous at all.  :o)

Here's a thread from SGI's XFS mailing list from before XFS on Linux had 
barrier support:
http://oss.sgi.com/archives/xfs/2005-06/msg00149.html

Here's an informative thread on LKML with some good information:
http://lkml.org/lkml/2006/5/19/33
An analysis of the performance hit due to barriers (and a fairly vague 
suggestion on a solution) can be found at:
http://lkml.org/lkml/2006/5/22/278

The executive summary is that you can use xfs_db to change the log 
(journal) to version 2, which allows larger buffers, which reduces the 
impact the barriers have (fewer, larger log IOs, so fewer barriers).
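I haven't needed the conversion myself, but as a rough sketch (device names 
are placeholders): you can check whether a filesystem already has a version 2 
log, and for a new filesystem you can simply ask for one at creation time:

xfs_db -r -c version /dev/sdb1         # look for LOGV2 in the feature list
mkfs.xfs -l version=2,size=64m /dev/sdb1   # new fs with a v2 log (destroys existing data)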

Chris

--
___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:http://backuppc.wiki.sourceforge.net
Project: http://backuppc.sourceforge.net/


Re: [BackupPC-users] backuppc 3.0.0: another xfs problem?

2008-12-19 Thread Chris Robertson
dan wrote:
 If the disk usage is the same as before the pool, the issue isnt 
 hardlinks not being maintained.  I am not convinced that XFS is an 
 ideal filesystem.  I'm sure it has it's merits, but I have lost data 
 on 3 filesystems ever, FAT*, XFS and NTFS.  I have never lost data on 
 reiserfs3 or ext2,3. 

 Additionally, I am not convinced that it performs any better than ext3 
 in real world workloads.  I have see many comparisons showing XFS 
 marginally faster in some operations, and much faster for file 
 deletions and a few other things, but these are all simulated 
 workloads and I have never seen a comparison running all of these 
 various operations in mixed operation.  how about mixing 100MB random 
 reads with 10MB sequential writes on small files and deleting 400 
 hardlinks?

 I say switch back to ext3.

Creating or resizing (you do a proper fsck before and after resizing, 
don't you?) an ext3 filesystem greater than about 50GB is painful.  The 
larger the filesystem, the more painful it gets.  Having to guess the 
number of inodes you are going to need at filesystem creation is a nice 
bonus.
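For anyone stuck doing it anyway, the dance for an offline resize looks 
roughly like this (the device name and target size are placeholders):

umount /data
e2fsck -f /dev/vg0/data          # forced check before...
resize2fs /dev/vg0/data 400G     # ...growing (or shrinking) the filesystem
e2fsck -f /dev/vg0/data          # ...and another check after
mount /dev/vg0/data /data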

EXT4, btrfs, or Tux3 can't get here (and stable!) fast enough.

Chris

--
___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:http://backuppc.wiki.sourceforge.net
Project: http://backuppc.sourceforge.net/


Re: [BackupPC-users] backuppc 3.0.0: another xfs problem?

2008-12-18 Thread Chris Robertson
Thomas Smith wrote:
 Hi,

 No, it continues to take 22 hours or so each day.

 -Thomas

How is your XFS volume mounted?  Did you add the noatime and 
nodiratime directives?  If you have battery backed storage, I would 
highly recommend using nobarrier as well 
(http://oss.sgi.com/projects/xfs/faq.html#wcache_persistent).

How full is the XFS partition?  Performance suffers greatly when the 
filesystem usage rises above about 80%.
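For reference, an fstab entry along those lines might look like the following 
(mount point and device are placeholders, and use nobarrier only if your 
storage really is battery backed):

/dev/sdb1  /var/lib/backuppc  xfs  noatime,nodiratime,nobarrier  0  0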

Chris

--
SF.Net email is Sponsored by MIX09, March 18-20, 2009 in Las Vegas, Nevada.
The future of the web can't happen without you.  Join us at MIX09 to help
pave the way to the Next Web now. Learn more and register at
http://ad.doubleclick.net/clk;208669438;13503038;i?http://2009.visitmix.com/
___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:http://backuppc.wiki.sourceforge.net
Project: http://backuppc.sourceforge.net/


Re: [BackupPC-users] setup difficulty

2008-12-15 Thread Chris Robertson
Lofton H Alley Jr wrote:
 Its been a small struggle with slow progress. I am stymied over this one 
 though.

 Here is the network layout: two desktops and a lappie on wifi. This 
 should be easy right? One deskie has an 80 GB primary and a 320G storage 
 HDD divided into 3 partitions. The other deskie has an 80 GB primary and 
 a 1TB storage partitioned into 4 parts, with the largest (500GB) for 
 Backup.
 Everything is running Ubuntu 8.10 and is pretty stable. The drives are 
 all networked with nfs file sharing so that the /home partition on the 
 80GB drives and all the partitions on the storage drives (except the 
 backup partition which I have set up as described below) are shared 
 through /media folders on the host deskie and then fstabbed to a 
 /home//whatever/ folder on the various systems. This brings them up on 
 the Desktop of each computer as a volume.

 The backuppc setup seemed pretty straightforward to me, but I 
 immediately found that I am lazier than I thought and had to fix some 
 stupid syntax stuff on setup which tired me out (do you know the 
 feeling?). After working through those issues I hit on some problems 
 with setting the /media/backup partition as the backuppc partition and 
 found out a way to fix that through the documentation. What I did was to 
 mount the partition in the default folder in the setup, which seemed to 
 resolve that problem on setup.

 What it is sticking on now is that it cannot setup the LOG file in the 
 default location. I'm upstairs now and that box is downstairs in my 
 daughter's room, so i can just say that i got the same error message 
 with the default folder and also when I changed it to my Ubuntu standard 
 /var/log folder. The error refers me to the perl script , line 1184 (as 
 I recall) which is , sure enough, some lines about the LOG files, STERR 
 and STOUT and the error message that I got. What the error message says 
 is that the program cannot set up the /var/log/LOG file and so must shut 
 down.

 Now, the /var folders are mostly owned by root and so there is not much 
 to do with that, I switched back to the /usr default, but it makes no 
 difference. It is my guess that there is another error that is calling 
 to be written to the LOG but it can't maybe because of the same error.

 anybody that can help would be thanked and appreciated. I am in China 
 and so I have a different schedule from other people and sometimes can't 
 reply in a timely fashion, but I will get this worked out , thanks(in 
 advance) for all your help
   

Settings from my /etc/BackupPC/config.pl file:

$Conf{BackupPCUser} = 'backuppc';
$Conf{TopDir} = '/data/BackupPC';
$Conf{ConfDir} = '/etc/BackupPC';
$Conf{LogDir} = '/var/log/BackupPC';

Permissions for the directory structure /var/log/BackupPC:

-bash-3.2$ ls -ld /var/
drwxr-xr-x 21 root root 4096 Dec 15 12:36 /var/
-bash-3.2$ ls -ld /var/log/
drwxr-xr-x 14 root root 4096 Dec 15 12:37 /var/log/
-bash-3.2$ ls -ld /var/log/BackupPC/
drwxr-x--- 2 backuppc backuppc 4096 Dec 15 13:42 /var/log/BackupPC/
-bash-3.2$ ls -l /var/log/BackupPC/
total 252
-r--r--r-- 1 backuppc backuppc 5 Dec 12 15:36 BackupPC.pid
srwxr-x--- 1 backuppc backuppc 0 Dec 12 15:36 BackupPC.sock
-rw-r- 1 backuppc backuppc 0 Oct 16 10:29 LOCK
-rw-r- 1 backuppc backuppc  3056 Dec 15 13:00 LOG
-rw-r- 1 backuppc backuppc  2188 Dec 15 07:00 LOG.0.z
-rw-r- 1 backuppc backuppc  2235 Dec  5 07:00 LOG.10.z
-rw-r- 1 backuppc backuppc  2765 Dec  4 07:00 LOG.11.z
-rw-r- 1 backuppc backuppc  2505 Dec  3 07:00 LOG.12.z
-rw-r- 1 backuppc backuppc  3262 Dec  2 07:00 LOG.13.z
-rw-r- 1 backuppc backuppc  1948 Dec 14 07:00 LOG.1.z
-rw-r- 1 backuppc backuppc  5396 Dec 13 07:00 LOG.2.z
-rw-r- 1 backuppc backuppc  4202 Dec 12 07:00 LOG.3.z
-rw-r- 1 backuppc backuppc  4028 Dec 11 07:00 LOG.4.z
-rw-r- 1 backuppc backuppc  5080 Dec 10 07:00 LOG.5.z
-rw-r- 1 backuppc backuppc  1534 Dec  9 07:00 LOG.6.z
-rw-r- 1 backuppc backuppc  2327 Dec  8 07:00 LOG.7.z
-rw-r- 1 backuppc backuppc  2982 Dec  7 07:00 LOG.8.z
-rw-r- 1 backuppc backuppc  2003 Dec  6 07:00 LOG.9.z
-rw-r- 1 backuppc backuppc 1 Dec 15 13:00 status.pl
-rw-r- 1 backuppc backuppc 44451 Dec 15 12:00 status.pl.old
-rw-r- 1 backuppc backuppc   226 Dec 15 03:03 UserEmailInfo.pl

Perhaps your permissions are different?  Otherwise, are you using 
AppArmor or SELinux?  If you su to the BackupPCUser, can you touch the 
log file?
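Something like the following, run as root and assuming the stock backuppc 
user and log path, will answer that last question quickly:

su -s /bin/bash -c 'touch /var/log/BackupPC/LOG && echo writable' backuppc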

Chris


--
SF.Net email is Sponsored by MIX09, March 18-20, 2009 in Las Vegas, Nevada.
The future of the web can't happen without you.  Join us at MIX09 to help
pave the way to the Next Web now. Learn more and register at
http://ad.doubleclick.net/clk;208669438;13503038;i?http://2009.visitmix.com/
___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:

Re: [BackupPC-users] Native Window version of rsync

2008-11-20 Thread Chris Robertson
dtktvu wrote:
 Hmm, I see...

 Thanks very much for pointing that out.

 BTW, what do you mean by You might run into some other problems if using GPL 
 code in a C# environment if you are using shared libraries that are not GPL? 
   

If you are statically linking libraries that do not use a GPL compatible 
license, you may not be able to legally distribute your code.  The GPL 
says that if you make a derived work from a project that is GPL 
licensed which incorporates components that are not GPL'd, you can't 
distribute the derived work. It doesn't (and legally can't) force the 
GPL onto someone else's code.

 Thanks.

You might also want to see http://www.gnu.org/cgi-bin/license-quiz.cgi.

Chris

P.S. Standard disclaimers apply.  I am not a lawyer.  This is not legal 
advice.

-
This SF.Net email is sponsored by the Moblin Your Move Developer's challenge
Build the coolest Linux based applications with Moblin SDK  win great prizes
Grand prize is a trip for two to an Open Source event anywhere in the world
http://moblin-contest.org/redirect.php?banner_id=100url=/
___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:http://backuppc.wiki.sourceforge.net
Project: http://backuppc.sourceforge.net/


Re: [BackupPC-users] moving a volume

2008-11-18 Thread Chris Robertson
Ray Todd Stevens wrote:
 We have a backuppc system setup that has been running for a while now.   We 
 are 
 expanding the office and I am going to need more storage space.  To do this I 
 will need to 
 copy the data off, reconfigure the array with more drives and then reload the 
 system.

 How is the best way to do this?

See "Copying the pool" under the "Other Installation Topics" header of 
the FAQ.

http://backuppc.sourceforge.net/faq/BackupPC.html#other_installation_topics

Chris

-
This SF.Net email is sponsored by the Moblin Your Move Developer's challenge
Build the coolest Linux based applications with Moblin SDK  win great prizes
Grand prize is a trip for two to an Open Source event anywhere in the world
http://moblin-contest.org/redirect.php?banner_id=100url=/
___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:http://backuppc.wiki.sourceforge.net
Project: http://backuppc.sourceforge.net/


Re: [BackupPC-users] Child Exited Prematurely

2008-11-18 Thread Chris Robertson
James Sefton wrote:

 Hi,

 Please excuse me if I am using this wrong, in all my years in IT, it 
 seems this is the first time I have used a mailing list for support. 
 (I’m usually pretty good at the whole RTFM thing)

 We have a backup box (FC6) that is running backups from a lot of 
 windows servers using rSync.

 We are not running the latest version of BackupPC. I am reluctant to 
 update this unless I have a good idea that it’s going to help since we 
 have a lot of automated scripts that manage the BackupPC config files 
 and we will need to review them all. If this is the route we need to 
 go then no problem but as I understand it, my problem is specifically 
 related to rSync. (please correct me if I am wrong.)

 We have been running this for well over a year now (maybe a few years, 
 my memory fails me) and iirc this problem has only started showing up 
 over the past 6-12 months. We do not have any automatic updates on the 
 BackupPC box so nothing there should have really changed.

 Out backup box resides on what we call our CORE network. The servers 
 it backs up are all on remote network which are connected to the CORE 
 network with VPN’s. (VPN’s are running over DSL) Backups do take a 
 long time to run (~10 hours or so) due to the amount of data.

 The problem we are seeing is that Backups are randomly failing.

 The log file on BackupPC showing something like this:

 Connected to xxx.xxx.xxx.xxx:873, remote version 29

 Negotiated protocol version 26

 Connected to module kale-susl

 Sending args: --server --sender --numeric-ids --perms --owner --group 
 -D --links --times --block-size=2048 --recursive --ignore-times . .

 Xfer PIDs are now 5220

 [ skipped 971 lines ]

 Read EOF: Connection reset by peer


I saw a similar problem from time to time due to firewalls that close 
inactive connections. RSync can sit a while without passing data as 
file lists are created, and files are compared. See 
https://bugzilla.samba.org/show_bug.cgi?id=5695 for a related issue. 
Check your VPN setup for inactive session timeout or the like. SSH has 
the ServerAliveInterval option that can mitigate this.
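If any of these hosts end up going over ssh rather than a bare rsyncd 
connection, a keepalive entry in the backuppc user's ~/.ssh/config keeps the 
session from ever looking idle (the numbers are just a starting point):

Host *
    ServerAliveInterval 60
    ServerAliveCountMax 3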

 Tried again: got 0 bytes

 finish: removing in-process file Data/Apps/GoldMine/GMBase/ScriptsW.MDX

 Child is aborting

 Parent read EOF from child: fatal error!

 Done: 923 files, 3041217353 bytes

 Got fatal error during xfer (Child exited prematurely)

 Backup aborted (Child exited prematurely)

 The log on the windows server is:

 2008/11/18 17:46:05 [3252] connect from UNKNOWN (xxx.xxx.xxx.xxx)
 2008/11/18 17:46:05 [3252] rsync on . from [EMAIL PROTECTED] (xxx.xxx.xxx.xxx)
 2008/11/18 17:46:05 [3252] building file list
 2008/11/18 18:03:14 [3252] rsync: writefd_unbuffered failed to write 4092 
 bytes [sender]: Connection reset by peer (104)
 2008/11/18 18:03:14 [3252] rsync error: error in rsync protocol data stream 
 (code 12) at /home/lapo/packaging/tmp/rsync-2.6.9/io.c(1122) [sender=2.6.9]

 I have been trying to work this out for month or two now.

 The problems seem to be random, but more common on specific servers.

 There is nothing special about these specific servers – they seem just 
 random but persistant.

 Originally, we were running the recommended rSync package for BackupPC.

 After looking into the problem over the past month, I have seen a lot 
 of posts suggesting there this was a common problem with a particular 
 build of rSync.

 I have updated rSync on the backupPC box, “rpm –q rsync” currently 
 replies...

 rsync-2.6.9-5.fc8 (yes, the only updated rpm i could find was an fc8 one)


That's the danger of using Fedora Core for a server. The support just 
doesn't last. SUSE, Ubuntu and CentOS are much better server OS choices.

You might try grabbing the SRPM from Fedora 9 
(http://download.fedora.redhat.com/pub/fedora/linux/updates/9/SRPMS.newkey/rsync-3.0.4-0.fc9.src.rpm),
 
install rpm-build and see if you can roll your own.
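Roughly, using the SRPM from the URL above (plus whatever BuildRequires it 
complains about along the way):

yum install rpm-build
rpmbuild --rebuild rsync-3.0.4-0.fc9.src.rpm
rpm -Uvh /usr/src/redhat/RPMS/*/rsync-3.0.4*.rpm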

 On a few select servers (including the one that generated the above 
 logs) I setup cygwin directly and added rSync to it with the installer 
 wizard.

 I selected rSync 2.6.9 rather than 3.x.x as i assumed this would b 
 required for compatibility.


Nope. RSync is backwards compatible.

 These seem to be the only recommendations I can find for fixing this 
 problem. (updating rSync)

 Sadly, it has not helped me so far.

 The connection between the BackupPC server and the example server used 
 for the above logs is VPN like the rest of the servers but this server 
 is local and the VPN operates over local Ethernet links. (ie. Stable 
 links.)

 I have tried and tried to verify as much as I can that there are no 
 network/VPN dropouts at the times that this is failing and im pretty 
 sure there are not. It sometimes fails within 3 minutes of the job 
 starting, other times after hours. I know I have had a remote desktop 
 session open to the server and been actively using it at the time it 
 failed and I noticed absolutely no disturbance in my RD 

Re: [BackupPC-users] Questions about compression in BackupPC

2008-11-05 Thread Chris Robertson
John Goerzen wrote:
 Hi everyone,

 I installed BackupPC to try it out for backing up Linux systems, and I
 have a few questions about it.

 First, the on-disk compression format makes me nervous.  It appears to
 use the deflate algorithm, but cannot be unpacked with either gzip or
 unzip.  It would seem that the few bytes that adding a gzip header
 means would be well worth it, since it would buy the ability to
 extract it without using specialized tools.  It also makes me nervous
 because it isn't a completely off-the-shelf implementation, and
 doesn't appear to store a CRC in the file; is there integrity checking
 anywhere?
   

http://backuppc.sourceforge.net/faq/BackupPC.html#compressed_file_format

and

http://backuppc.sourceforge.net/faq/BackupPC.html#backuppc_operation

 Secondly, I would love to be able to use bzip2 for the on-disk
 compression of each backup.  It appears that bzip2 can be used for
 archives, but not the regular backups.  Is this in the works anywhere?

 Third, I'm wondering how well BackupPC deals with sparse files.  I
 notice that the examples for tar are not giving -S to detect sparse
 files.  Does BackupPC store sparse files efficiently, even if not
 using compression?  On restoration, what will it do with sparse files
 -- will it re-create the holes?
   

See 
http://backuppc.sourceforge.net/faq/BackupPC.html#how_to_backup_a_client.  
Specifically $Conf{TarClientCmd} or $Conf{RsyncClientCmd}.  Adjust 
the tar (or rsync) command to your preference.
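For instance, if your tar command still looks like the stock 3.x ssh+tar 
default, sparse detection is just one extra flag on the end.  This is a 
sketch against that assumed default; graft --sparse onto whatever your 
$Conf{TarClientCmd} actually contains:

$Conf{TarClientCmd} = '$sshPath -q -x -n -l root $host'
                    . ' env LC_ALL=C $tarPath -c -v -f - -C $shareName+'
                    . ' --totals --sparse';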

 Thanks,

 -- John

Chris

-
This SF.Net email is sponsored by the Moblin Your Move Developer's challenge
Build the coolest Linux based applications with Moblin SDK  win great prizes
Grand prize is a trip for two to an Open Source event anywhere in the world
http://moblin-contest.org/redirect.php?banner_id=100url=/
___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:http://backuppc.wiki.sourceforge.net
Project: http://backuppc.sourceforge.net/


Re: [BackupPC-users] Backup through slow line?

2008-09-02 Thread Chris Robertson
Les Mikesell wrote:
 Christian Völker wrote:
   
 | If the full backup fails, does it start from scratch every time or are
 | some files already stored in the backup and used during the next try, so
 | it'll finish some day?
 | If you are using rsync as the transfer method it will continue
 | approximately where it stopped.
 Hmmm...may I check in some way how many data has been backed up? Even if
 it was a failed full backup?
 I remember it took me several days to initialize the rsnapshot backup.
 But when I check the rsync open files on the server it always stays in
 the same folder...
 

 The web interface should show a 'partial' until the backup completes. I 
 don't have one in that state so I'm not sure how much other info you 
 get.  

A partial backup shows everything that a completed full or incremental 
does (#Files, Size/MB for existing and new as well as MB/sec).  It's 
browseable just like a full or incremental.  I'm not sure if you get a 
history, as a more complete partial replaces a less complete one.

Chris

-
This SF.Net email is sponsored by the Moblin Your Move Developer's challenge
Build the coolest Linux based applications with Moblin SDK  win great prizes
Grand prize is a trip for two to an Open Source event anywhere in the world
http://moblin-contest.org/redirect.php?banner_id=100url=/
___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:http://backuppc.wiki.sourceforge.net
Project: http://backuppc.sourceforge.net/


Re: [BackupPC-users] archive problem

2008-08-01 Thread Chris Robertson
Jeff Rippy wrote:
 yes I had thought of that too and have already added the backuppc user 
 to the tape group.  permissions are 660 or rw-rw with owner root 
 and group tape.  Also the backuppc documentation and even the default 
 configuration uses /dev/st0 so why exactly do you recommend /dev/nst0 
 instead.  Shouldn't backuppc be controlling where it writes to the tape?
 I think the problem has something to do with what Holger was saying, 
 for some reason the script is trying to create /dev/st0 even though 
 its already there.
 Here is an excerpt from the script that I think is relevant but I 
 haven't been able to find information on the sh or dash or bash 
 (whichever it is) switches that are being used here:

Those are file test operators.  See 
http://tldp.org/LDP/abs/html/fto.html for a pretty extensive (and 
annotated) list.
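The same tests are easy to poke at from a shell before blaming the script 
(device path as in your setup):

test -b /dev/nst0 && echo "block device"
test -c /dev/nst0 && echo "character device"
test -f /dev/nst0 && echo "regular file"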

 (Lines 107-129 from /usr/share/backuppc/bin/BackupPC_archiveHost - 
 specifically look at 109 where it checks for something (again not sure 
 what the switches do) and 116 where it tries to create the output 
 location)

 107 my $cmd = "$tarCreate -t -h $host -n $bkupNum -s $share . ";
 108 $cmd   .= "| $compPath " if ( $compPath ne "cat" && $compPath 
 ne "" );
 109 if ( -b $outLoc || -c $outLoc || -f $outLoc ) {

If the file identified by the variable $outLoc is a regular file, OR a 
block device, OR a character device, as noted in the comment directly 
following...

 110 #
 111 # Output file is a device or a regular file, so don't use 
 split
 112 #

Chris

-
This SF.Net email is sponsored by the Moblin Your Move Developer's challenge
Build the coolest Linux based applications with Moblin SDK  win great prizes
Grand prize is a trip for two to an Open Source event anywhere in the world
http://moblin-contest.org/redirect.php?banner_id=100url=/
___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:http://backuppc.wiki.sourceforge.net
Project: http://backuppc.sourceforge.net/


Re: [BackupPC-users] Why should I use root to connect to host

2008-07-29 Thread Chris Robertson
brunal wrote:
 Hi,

 One question that I dont understand : can I use another user than  
 root to connect to my host?
   

The only reason to connect as root is to make sure you have access to 
all the files you want to back up.

 And to my host side, a user backuppc exist and have access to all the  
 necessary folder and file. Should this configuration work?
   

Since you have this condition met, you should be fine.


 thanks so much for your help,

 Best regards,

 Bruno.

Chris

-
This SF.Net email is sponsored by the Moblin Your Move Developer's challenge
Build the coolest Linux based applications with Moblin SDK  win great prizes
Grand prize is a trip for two to an Open Source event anywhere in the world
http://moblin-contest.org/redirect.php?banner_id=100url=/
___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:http://backuppc.wiki.sourceforge.net
Project: http://backuppc.sourceforge.net/


Re: [BackupPC-users] Howto backup BackupPC running on a RAID1 with mdadm for offline-storage

2008-07-15 Thread Chris Robertson
Kurt Tunkko wrote:
 Hello Holker,

 Holger Parplies wrote:

   
 3. Mirror partition tables from one of the existing disks:

 # sudo sfdisk -d /dev/sda | sfdisk /dev/sdc
   
 apart from something having been mangled (???), I tend to wonder why you
 need root permission to read the partition table of /dev/sda but not to write
 it to /dev/sdc ;-). You might, of course, have relaxed the permissions on
 /dev/sdc, but I'd spare an extra 'sudo' for the howto ;-).
 

 I took the information above from the page:

 Setting up software RAID in Ubuntu Server
 http://advosys.ca/viewpoints/2007/04/setting-up-software-raid-in-ubuntu-server/

  Once system as been rebooted with the new unformatted replacement 
 drive in place, some manual intervention is required to partition the 
 drive and add it to the RAID array.
 The new drive must have an identical (or nearly identical) partition 
 table to the other. You can use fdisk to manually create a partition 
 table on the new drive identical to the table of the other, or if both 
 drives are identical you can use the “sfdisk” command to duplicate the 
 partition. For example, to copy the partition table from the second 
 drive “sdb” onto the first drive “sda”, the sfdisk command is as follows:

 sfdisk –d /dev/sdb | sfdisk /dev/sda

 I don't know if it's possible to add a 3rd drive to the RAID, that 
 hasn't got the right partitions on it :-?

   
 I believe the original idea is *not* to temporarily cripple your RAID but
 rather to add a third disk (three way mirror).
 

 you're right I changed my setup and have now a three way raid, so that I 
 can unplugg one drive and keep it as offline backup.

   I'm not sure if you can do that after initial creation of the array,
   but the man page suggests it should be possible on kernels which
   provide necessary support.

 On Ubuntu Server I was able to add a third harddisk to the array and get 
 it synced. After sync had been completed I can remove the 3rd drive from 
 the array and lock it away for offline storage.

Change a 2-disk RAID1 to a 3-disk RAID:
# sudo mdadm --grow --raid-devices=3 /dev/md0

Add a 3rd drive to the existing 2disk RAID1:
# sudo mdadm --add /dev/md0 /dev/sdc1
- the spare will be rebuilt

Remove 3rd disk from RAID
# sudo mdadm --fail /dev/md0 /dev/sdb1
# sudo mdadm --remove /dev/md0 /dev/sdb1

 While this approach is much better than the one I suggested yesterday it 
 still leads to some Questions:

 1) Is there a way to add the 3rd drive to RAID1 as soon as it will be 
 connected to the system (External harddrive that is oonnected via usb2)?
 Or more generally: Can I run a script when an external storage device is 
 connected via usb?
   

Yes.  http://ubuntuforums.org/showthread.php?t=502864
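The thread above boils down to a udev rule that fires when the disk appears.  
A very rough sketch -- the serial number, rule filename and script path are 
all made up, and you would want to test it thoroughly before letting it near 
your array:

# /etc/udev/rules.d/99-backup-disk.rules
ACTION=="add", SUBSYSTEM=="block", KERNEL=="sd?1", ENV{ID_SERIAL_SHORT}=="ABC123", RUN+="/usr/local/sbin/attach-backup-mirror.sh"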

 2) Do I need to resize my RAID after removing the 3rd harddrive with
 # sudo mdadm --grow --raid-devices=2 /dev/md0
   

Personally, I wouldn't bother, unless it made my monitoring software 
throw a false error.

 Are there any problems when the RAID will be used in clean, but 
 degraded state?
   

I doubt it.  The mirror is going to be active with two drives, so writes 
might even be faster.

 - Kurt
   

Chris

-
This SF.Net email is sponsored by the Moblin Your Move Developer's challenge
Build the coolest Linux based applications with Moblin SDK  win great prizes
Grand prize is a trip for two to an Open Source event anywhere in the world
http://moblin-contest.org/redirect.php?banner_id=100url=/
___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:http://backuppc.wiki.sourceforge.net
Project: http://backuppc.sourceforge.net/


Re: [BackupPC-users] Backups are taking longer and longer to start.

2008-06-25 Thread Chris Robertson
Bruno Faria wrote:
 Hi,

 I have setup backupPC to run 3 backups at the same time, and this was 
 working very well since all host were pretty much getting a backup 
 once a day. But for some reason, now the backups are not starting as 
 often as they were. Sometimes all have running is just 
 BackupPC_nightly and it will take hours and hours for another backup 
 to start even though there are multiple pending backup requests. Now I 
 have hosts that haven't being backed up for 5 days now, and most of 
 the other backups I have been starting manually. What's also weird is 
 that I never see more than two backups running at the same time, even 
 though BackupPC is supposed to backup 3 hosts at a time.

 What could be going on? And what I need to fix this?

 Thanks in advance!

BackupPC_nightly (which deletes files from the pool that are no longer 
needed) and BackupPC_link (which links duplicate files to the pool) 
can't run at the same time.  If you have a number of BackupPC_link 
processes waiting to run (configurable via the MaxPendingCmds 
option),  no more backups will start.

How long does your BackupPC_nightly take to run?  Perhaps you would be 
better off setting BackupPCNightlyPeriod > 1.
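Spreading the nightly pass over several nights, for instance (the numbers 
here are only an example):

$Conf{BackupPCNightlyPeriod}  = 4;   # traverse 1/4 of the pool each night
$Conf{MaxBackupPCNightlyJobs} = 4;   # and use 4 processes to do it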

Chris


-
Check out the new SourceForge.net Marketplace.
It's the best place to buy or sell services for
just about anything Open Source.
http://sourceforge.net/services/buy/index.php
___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:http://backuppc.wiki.sourceforge.net
Project: http://backuppc.sourceforge.net/


Re: [BackupPC-users] Consistent (and frustrating!) 'child exited prematurely' errors

2008-06-19 Thread Chris Robertson
Leandro Tracchia wrote:
 i didn't think to check this log before... it has interesting entries.
 this is the rsyncd log from the windows server. it has errors right
 around the same time the backuppc log shows the 'child exited
 prematurely' error. most of the errors complain about the file names
 being too long. but the last two entries at 23:49 make no sense to me.

 could long file names really by causing the timeouts??
 what do the last two lines mean??

 2008/06/18 20:03:31 [2952] connect from PANDION (192.168.2.1)
 2008/06/18 20:03:36 [2952] rsync on . from [EMAIL PROTECTED] (192.168.2.1)
 2008/06/18 20:06:56 [2952] rsync: readlink
 ive/c/WIND-uild/Word Files/1MA1411 Site Description Edited
 11-12-07/Figure 4 - Site 1MA1411 after Clearing and Raking. Note the
 toppled stacked stone footer in the foreground. View is to the
 north..JPG (in A-F) failed: File name too long (91)

   

SNIP

 2008/06/18 20:12:47 [2952] rsync: readlink
 ive/c/WINDOWS/system32/E:\A-F/RedSt--st by Site/Artifacts By
 Project/RSA99-2000 Jpegs/Original Archive/UNknown Unprocessed/AOA
 RSA99 Pics (Unlabeled)/1MA096,PI---r-Parallel Oblique Flaking.jpg
 (in A-F) failed: File name too long (91)
 2008/06/18 23:49:54 [2952] rsync: writefd_unbuffered failed to write
 4092 bytes [sender]: Connection reset by peer (104)
   

Transient network issue or a firewall.  The Client is claiming that the 
BackupPC server sent a TCP reset.  The three and a half hour gap between 
log entries is interesting.  Did the backup not encounter any more long 
filenames in that time, or did it get hung up on something...  More 
verbose logging might help determine that.

 2008/06/18 23:49:54 [2952] rsync error: error in rsync protocol data
 stream (code 12) at io.c(1119) [sender=2.6.8]

This message is just an indication that the data stream was interrupted 
in the middle.

Chris

-
Check out the new SourceForge.net Marketplace.
It's the best place to buy or sell services for
just about anything Open Source.
http://sourceforge.net/services/buy/index.php
___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:http://backuppc.wiki.sourceforge.net
Project: http://backuppc.sourceforge.net/


Re: [BackupPC-users] No CGI Interface?

2008-06-04 Thread Chris Robertson
Johnny Stork wrote:
 I just installed BPC on a RHEL4 machine, but when trying to access the 
 gui at http://serverip/cgi-bin//BackupPC_Admin I just see the 
 text/contents of the perl script? No gui?

 Any suggestions?

Here are the steps I used to get the CGI interface working on a fresh 
(minimal) CentOS 4.6 install:

* Install httpd-2.0.52, apr-0.9.4, apr-util-0.9.4 and httpd-suexec-2.0.52 
RPMs (the last three for dependencies).
* When running configure.pl (from the extracted BackupPC install files) 
specify apache as the "BackupPC should run as" user, /var/www/cgi-bin as the 
CGI bin directory, /var/www/html/BackupPC as the Apache image directory 
and /BackupPC as the URL for the image directory.
* (Optional)  After install completes, run the following command as root:
cp /path/to/install/files/init.d/linux-backuppc /etc/init.d/backuppc && 
chkconfig backuppc on && chown apache /data && service backuppc start

Surf to http://backupserver/cgi-bin/BackupPC_Admin.  At this point you 
should not have admin access (since you haven't been prompted for 
authentication), but at least the CGI should run.

If your setup deviated from this, please elaborate.
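Seeing the raw Perl usually means Apache isn't treating the file as a CGI at 
all.  Assuming the stock RHEL/CentOS httpd layout, a couple of quick sanity 
checks:

grep -ri scriptalias /etc/httpd/conf /etc/httpd/conf.d   # cgi-bin should be a ScriptAlias
ls -l /var/www/cgi-bin/BackupPC_Admin                     # must be executable
tail -n 20 /var/log/httpd/error_log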

Chris



-
Check out the new SourceForge.net Marketplace.
It's the best place to buy or sell services for
just about anything Open Source.
http://sourceforge.net/services/buy/index.php
___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:http://backuppc.wiki.sourceforge.net
Project: http://backuppc.sourceforge.net/


Re: [BackupPC-users] Question about link pending.

2008-06-02 Thread Chris Robertson
Bruno Faria wrote:
 Hi,

 I've posted a similar a question before but since I didn't get any response
 the first time, I guess I'll try again . :)
   

You might be edified by reading the documentation 
(http://backuppc.sourceforge.net/faq/BackupPC.html).  I highly recommend 
the section detailing how the software works 
(http://backuppc.sourceforge.net/faq/BackupPC.html#backuppc_operation).  
If you are not the document reading type, I have excerpted some 
pertinent details below...

 For some reason I'm getting alot of link pending. During the first backups
 I didn't have any problem with link pending since once a backup was done it
 would be done without any link pending. But whenever BackupPC_Nightly
 started, all the backups are getting link pending for hours and hours
 (about 7+ hours of link pending for each host).

 So here are my questions:

 1) Is it normal for links to be pending this long?
   

 From the document section on $Conf{MaxBackupPCNightlyJobs} (under 
http://backuppc.sourceforge.net/faq/BackupPC.html#general_server_configuration):

Each night, at the first wakeup listed in $Conf{WakeupSchedule} 
http://backuppc.sourceforge.net/faq/BackupPC.html#item__conf_wakeupschedule_, 
BackupPC_nightly is run. Its job is to remove unneeded files in the 
pool, ie: files that only have one link. To avoid race conditions, 
BackupPC_nightly and BackupPC_link cannot run at the same time.

 2) Would it be bad if I just increase the max amount of link pending on
 the config.pl file to allow about 20+ pending links?
   

Personally, I have $Conf{MaxPendingCmds} = 50;  My nightly job* (which 
removes between 6 and 9 GB of data from  a pool size of about 240GB on a 
pair of software mirrored SCSI drives) is currently taking between 9 and 
14 hours.  I get close to 30 link jobs queued, and they all clear out 
less than an hour after the nightly job finishes.  Since I run all my 
backups at night (blackout period of 07:00 to 19:30) I'll probably wind 
up making the nightly job run in the morning.

 3) What's the job of link pending?
   

Also from the docs :

BackupPC_link reads the NewFileList written by BackupPC_dump and 
inspects each new file in the backup. It re-checks if there is a 
matching file in the pool (another BackupPC_link could have added the 
file since BackupPC_dump checked). If so, the file is removed and 
replaced by a hard link to the existing file. If the file is new, a hard 
link to the file is made in the pool area, so that this file is 
available for checking against each new file and new backup.

 Thanks for help in advance!

 --Bruno

Hope this helps.

Chris

* Using the term job loosely here, as I personally have 
$Conf{MaxBackupPCNightlyJobs} = 8.

-
This SF.net email is sponsored by: Microsoft
Defy all challenges. Microsoft(R) Visual Studio 2008.
http://clk.atdmt.com/MRT/go/vse012070mrt/direct/01/
___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:http://backuppc.wiki.sourceforge.net
Project: http://backuppc.sourceforge.net/


Re: [BackupPC-users] Transfert BackupPC to an other machine (How ?)

2008-05-07 Thread Chris Robertson
Sam Przyswa wrote:
 Hi,

 We have to change our BackupPC server to a new machine, how to copy the
 entire BackupPC directory (120Gb) to an other machine ?

 I tried rsync, it crash after a long, long time, I tried scp but it
 don't pass the link and the dest directory become out of size after
 transferring about 50% of files...

 What is the right way to transfert a BackupPC with 120Gb of files ?
   

Using dd and netcat.

http://www.novell.com/coolsolutions/feature/19486.html

Run it over a ssh tunnel if you can't trust the network.
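A bare-bones sketch of the general dd-over-netcat idea -- device names, the 
port, and whether your netcat wants "-l -p 7000" or just "-l 7000" all depend 
on your setup, so treat this as a starting point rather than a recipe:

On the new machine:   nc -l -p 7000 | dd of=/dev/sdb bs=1M
On the old machine:   dd if=/dev/sda bs=1M | nc newserver 7000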

 Thanks in advance for your help.

 Sam.
   

Chris

-
This SF.net email is sponsored by the 2008 JavaOne(SM) Conference 
Don't miss this year's exciting event. There's still time to save $100. 
Use priority code J8TL2D2. 
http://ad.doubleclick.net/clk;198757673;13503038;p?http://java.sun.com/javaone
___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:http://backuppc.wiki.sourceforge.net
Project: http://backuppc.sourceforge.net/

