[BackupPC-users] cpool corrupted ...help needed..
Hi, I ran into a filesystem problem on my cpool. I checked the
filesystem and went back to doing backups and restores regularly.

My problem now is that the BackupPC status page says my pool is 100 GB,
but df says 300 GB (/var/lib/backuppc is on a dedicated filesystem),
and about 270 GB of that is in the cpool directory (I enabled pool
compression). The host summary page says the backups have a total size
of 280 GB prior to pooling and compression. The cpool seems to be
populated with files that are lost and unreferenced (and not removed by
the nightly processes). Another idea is that during the fsck some hard
links were turned into whole separate files (ext3 strangeness?).

Is there a way to:
- understand why deduplication seems not to be working if I read df and
  the host summary page, but seems to be working if I look at the
  status page? (Or: how do the status page and the summary page
  calculate their sizes?)
- remove spurious files from the pool? One way could be to stat each
  cpool file and remove it if it was created or accessed before the
  oldest backup I made, but this seems too greedy and would bring my
  hardware to its knees for too long (see the sketch below for what I
  have in mind).

I have also searched for files that aren't owned by backuppc (spurious
results of an fsck).

Thanks a lot
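A minimal sketch of the sweep I have in mind, assuming the standard
/var/lib/backuppc layout and with BackupPC stopped (so it can't race a
running backup or BackupPC_link). As far as I understand the 3.x pool,
a file that is still referenced by some backup has at least two hard
links (one in cpool, one or more under pc/), so a link count of 1
should mark an orphan:

    # list pool files no longer referenced by any backup (nlink == 1);
    # this only prints candidates, nothing is removed
    find /var/lib/backuppc/cpool -type f -links 1 -print

    # list files whose ownership fsck may have mangled
    find /var/lib/backuppc -not -user backuppc -print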
Re: [BackupPC-users] NAS / SAN and other storage devices
On Wed, 2009-03-18 at 08:37 -0400, yodo64 wrote:
> Hi all, I am wondering if it is possible to store the backup data on
> disks that are not resident in the BackupPC server. Can I use a NAS,
> or a disk on the network, which is not inside the BackupPC server? If
> yes, where is the information about those possibilities? If not, what
> do I do when all the internal disk slots are full? Do I have to add a
> second BackupPC server?

You can use any device that shows up as a block device. For the second
question, I think the best solution is to use LVM/EVMS or whatever
logical volume manager you like...
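To illustrate the LVM route, a rough sketch (the device and volume
names here are made up; substitute your own): add the new disk to the
volume group, grow the logical volume, then grow the filesystem:

    pvcreate /dev/sdb1                   # hypothetical new disk/partition
    vgextend vg0 /dev/sdb1               # add it to the existing volume group
    lvextend -L +500G /dev/vg0/backuppc  # grow the LV holding the pool
    resize2fs /dev/vg0/backuppc          # grow the ext3 filesystem to match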
Re: [BackupPC-users] cloning the pool
On Wed, 2009-03-18 at 18:43 +0100, Matteo Sgalaberni wrote:
> Does a best practice exist for doing this?

Hi, search the previous posts on the list about using rsync for this
(or BackupPC_tarPCCopy, maybe?). A stupid solution (depending on the
size of your array..): dd and then rm?? (KISS logic)
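For the rsync route, a minimal sketch (paths assumed; BackupPC should
be stopped so the pool isn't changing under the copy). The -H option is
what preserves the hard links that implement pooling, and it can eat a
lot of memory on a large pool:

    # clone the whole pool, preserving attributes (-a) and hard links (-H)
    rsync -aH --delete /var/lib/backuppc/ /mnt/newpool/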
[BackupPC-users] logging..
Hi all, is there a way to make BackupPC log through syslog and not to
its regular file(s)? Thanks
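In case there is no native option, would something like this untested
sketch work? It feeds the main log to syslog with standard tools (the
log path is an assumption; the rotated, compressed logs would need
BackupPC_zcat instead):

    # follow BackupPC's main log and forward each line to syslog
    tail -F /var/lib/backuppc/log/LOG | logger -t backuppc -p daemon.info &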
Re: [BackupPC-users] how to backup the backuppc server (off site)
Hi all, what about these approaches:
- use LVM for the BackupPC pool, and schedule some kind of incremental
  dd of the device by taking a snapshot of the LVM volume (sketch at
  the end of this message)
- use Coda or another replicating filesystem on the BackupPC pool?
  Could that work?
- use some dedicated storage that does hardware replication (like Data
  Domain)

Any opinion is very much appreciated..

On Tue, 2009-02-10 at 09:58 -0600, Carl Wilhelm Soderstrom wrote:
> On 02/10 09:36 , scurry7 wrote:
> > Hello all, I have been looking for some documentation on how to
> > back up my BackupPC server. What on my backup server needs to be
> > duplicated so I can restore my backups once the server is back up?
> Sadly, there is not yet a way to replicate a BackupPC server in an
> easy and scalable fashion. Searching the mailing list archives will
> find you some solutions, but all are a bit kludgy, IMHO, and will
> require varying degrees of research/clue/manual intervention. The
> best and simplest way is to have your second backup server do backups
> directly from the clients on its own. This also adds some redundancy
> to the backup process, which has saved my backups on a few occasions.
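P.S. For the first approach, a minimal sketch of what I mean (the
volume group, snapshot size, and host names are made up; strictly
speaking this copies a full device image each time, so it is only
"incremental" if you diff the images yourself):

    # hypothetical VG/LV names; snapshot the pool LV while it is in use
    lvcreate -s -L 10G -n pool-snap /dev/vg0/backuppc
    dd if=/dev/vg0/pool-snap bs=1M | gzip -c \
        | ssh offsite 'cat > /srv/images/pool-$(date +%F).img.gz'
    lvremove -f /dev/vg0/pool-snap       # drop the snapshot when done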
[BackupPC-users] checking the pool
Hi, is there a way to check for corrupted files in BackupPC's pools? If
my BackupPC server hangs or gets reset or other bad things happen, at
the next boot it does an fsck of the filesystems, and suppose fsck puts
some files in lost+found. Is there a way to discover which backups
should be considered corrupted, or which ones have lost some files?
Thanks in advance
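One crude check I can imagine (assuming the stock Debian path for the
BackupPC_zcat utility; it will take a long time on a big pool): try to
decompress every cpool file and flag any that fail:

    # flag cpool files that no longer decompress cleanly
    find /var/lib/backuppc/cpool -type f | while read -r f; do
        /usr/share/backuppc/bin/BackupPC_zcat "$f" >/dev/null 2>&1 \
            || echo "corrupt: $f"
    done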
[BackupPC-users] user information customization
Hi, is there a way to show failed backups only to the users who
administer the host that failed? I manage users and hosts, as
documented, with the BackupPC hosts file, like this:

    serverA    0    backuppc    userA
    serverB    0    backuppc    userB

so userA and userB have different login credentials. If serverA misses
a backup, userB (in fact every user) sees the failure in the "failures
that need attention" table on the BackupPC status page. Is there a way
for userB to be notified only of failures of the servers for which he
is a valid user? (Or is there a little patch for this?) Thanks in
advance
Re: [BackupPC-users] incremental tar xfer errors
Hi, yes, I'm sure...

    r...@backup2:/etc/backuppc# grep '$Conf{TarIncrArgs}' config.pl
    $Conf{TarIncrArgs} = '--newer=$incrDate+ $fileList+';
    r...@stars:/etc/backuppc# grep '$Conf{TarIncrArgs}' ./*.pl
    $Conf{TarIncrArgs} = '--newer=$incrDate+ $fileList+';

I'm using Debian and Ubuntu and I get the same problem on both...

On Wed, 2009-01-07 at 22:42 -0800, Craig Barratt wrote:
> Simone writes:
> > I got a strange problem doing incrementals with tar over ssh using
> > --newer=$incrDate+. It seems to be an escaping problem in part of
> > the time reference for the incremental.
> Yes, the escaping isn't happening. The $incrDate+ form means to
> escape the value, so that is what you should use (since you are
> running through ssh). Are you sure $Conf{TarIncrArgs} includes
> --newer=$incrDate+ rather than --newer=$incrDate? Have you checked
> the per-client config too?
> Craig
Re: [BackupPC-users] incremental tar xfer errors
On Wed, 2009-01-07 at 22:42 -0800, Craig Barratt wrote:
> Yes, the escaping isn't happening. The $incrDate+ form means to
> escape the value, so that is what you should use (since you are
> running through ssh).

So is the problem:
- a configuration mistake (if so, where might I find it?), or
- a software bug (maybe in the Ubuntu/Debian packaging)?

Thanks
[BackupPC-users] incremental tar xfer errors
Hi, I have a strange problem doing incrementals with tar over ssh using
--newer=$incrDate+. It looks like an escaping problem in part of the
time reference for the incremental: the date part of --newer is parsed
correctly, but the hour part is not; it gets changed to 00:00:00, and
tar interprets 11:34:33 as a filename that doesn't exist.

Extract from the log file:

    Running: /usr/bin/ssh -c blowfish -C -q -x -n -l backup $host $tarPath -c -v -v -f - -C /root --totals --newer=2009-01-07 11:34:33 .
    incr backup started back to 2009-01-07 11:34:33 (backup #0) for directory /root
    Xfer PIDs are now 25928,25927
    /bin/tar: Treating date `2009-01-07' as 2009-01-07 00:00:00 + 0 nanoseconds
    /bin/tar: 11\:34\:33: Cannot stat: No such file or directory

The problem disappears if I use mtime instead of incrDate+ (or use
rsync) in the tar incremental options. What am I doing wrong? Thanks in
advance.
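For comparison, I would expect the working command to carry the
timestamp to the remote tar as a single argument, something like this
illustrative (not verbatim) line with the space escaped:

    /usr/bin/ssh -c blowfish -C -q -x -n -l backup $host $tarPath -c -v -v \
        -f - -C /root --totals --newer=2009-01-07\ 11:34:33 .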
[BackupPC-users] file extraction, windows
Hi all, when I extract some data from BackupPC on a Windows host, the
extraction stops at 2 GB. This happens whether I use the archive
function or a restore, with or without compression, and it happens only
on Windows, even when the filesystem is NTFS. Is there a solution for
this problem? The size of 2 GB is very suspect... I think this is a
Windows XP gift.
Re: [BackupPC-users] Small patch to graph the pool size (v2 patch)
Hi all, can this patch be modified to show the percentage of cpool
space per host, in some kind of stacked graph? I think it could be
useful to know how much of the pool storage is used by host A, how much
by host B, and so on. I'm not interested in the size of the backup for
each host, but in the percentage of the cpool used by each host (a
rough stopgap sketch is at the end of this message). Thanks a lot!

On Fri, 2008-02-29 at 09:51 +0100, Ludovic Drolez wrote:
> On Thu, Feb 28, 2008 at 10:17:43AM -0700, Kimball Larsen wrote:
> > but the images appear as busted images on the status page. (With
> > the patch :-D )
> Hi! This new patch should fix this bug. Anyway, the graphs will
> appear after BackupPC_nightly has run. I've also fixed another
> problem, which comes from the fact that I assumed that the CGI was
> index.cgi (only true for Debian users?).
> Cheers,
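P.S. Until such a graph exists, here is a rough way I can think of to
apportion pool usage per host from the hard-link structure (assuming
the usual pc/ layout, and splitting each file's size evenly across its
link count so that pooled data is shared between the hosts that
reference it):

    # approximate per-host share of the (c)pool, in MB
    cd /var/lib/backuppc/pc
    for h in *; do
        printf '%s ' "$h"
        find "$h" -type f -printf '%s %n\n' \
            | awk '{ sum += $1 / $2 } END { printf "%.0f MB\n", sum / 2^20 }'
    done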
[BackupPC-users] feature request
Hi all, I have a feature request that I would like to submit. I think
it would be interesting to be able to spool some backups of some hosts
remotely. I'm thinking of something like a property of a backup that
makes it get sent through ssh (scp or rsync) to another BackupPC
instance somewhere in a remote place. It would be great to manage the
backups of the LAN and of the servers locally, and to schedule an
offsite copy of some machines (the servers, for example) or some
critical data. If my local BackupPC goes away (fire, damage, or some
other special event), I go to the remote BackupPC and find my last
remoted backup (full, incremental, or whatever). Something similar to
rsyncing two compressed pools with both BackupPC instances not running,
but done inside BackupPC in a dedicated cycle, like the cleaning and
linking cycles. Could that work? Thanks
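In the meantime, a rough, untested sketch of doing this by hand with
the standard BackupPC_tarCreate utility, streaming one host's most
recent backup offsite (the host, share, and destination names are made
up):

    # hypothetical names: myserver (host), / (share), offsite (remote)
    sudo -u backuppc /usr/share/backuppc/bin/BackupPC_tarCreate \
        -h myserver -n -1 -s / . \
        | ssh offsite 'cat > /srv/offsite/myserver-latest.tar'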