Kelly Sauke wrote at about 09:21:28 -0500 on Friday, July 1, 2011:
BackupPC folks,
I have a need to modify certain files from backups that I have in
BackupPC. My pool is compressed and I've found I can decompress single
files using BackupPC_zcat. I can then modify those files as
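The decompress step mentioned above can be sketched as follows (host name, backup number, and paths here are hypothetical; BackupPC mangles stored names by prefixing each component with `f` and escaping `/` as `%2f`):

```shell
# Decompress a single file from a compressed backup (hypothetical paths).
# BackupPC_zcat understands BackupPC's compressed-file format and writes
# the plain contents to stdout.
sudo -u backuppc /usr/share/backuppc/bin/BackupPC_zcat \
    /var/lib/backuppc/pc/myhost/123/f%2fetc/fhosts > /tmp/hosts
```

Writing a modified file back is the hard part: BackupPC's compressed format (and any cached rsync digests appended to it) is not what plain gzip produces, which is what this thread goes on to discuss.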
Holger Parplies wrote at about 19:31:14 +0200 on Sunday, July 3, 2011:
While the comments in config.pl state
# This can be set to a string, an array of strings, or, in the case
# of multiple shares, a hash of strings or arrays.
that is actually incorrect. A hash of strings makes no
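For reference, the forms the config.pl comment describes look roughly like this, using $Conf{BackupFilesExclude} as the example (a hash is keyed by share name, which is why a plain "hash of strings" is the odd one out):

```perl
# A single string: one exclude, applied to every share.
$Conf{BackupFilesExclude} = '/tmp';

# An array of strings: several excludes, applied to every share.
$Conf{BackupFilesExclude} = ['/tmp', '/proc'];

# A hash keyed by share name, each value an array of excludes.
$Conf{BackupFilesExclude} = {
    '/'     => ['/tmp', '/proc'],
    '/home' => ['*.bak'],
};
```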
Holger Parplies wrote at about 20:10:20 +0200 on Sunday, July 3, 2011:
Hi,
Kelly Sauke wrote on 2011-07-01 09:21:28 -0500 [[BackupPC-users]
Recompressing individual files in pool]:
I have a need to modify certain files from backups that I have in
BackupPC. My pool is compressed
C. Ronoz wrote at about 16:31:33 +0200 on Wednesday, June 29, 2011:
What filesystem should I use? It seems ext4 and reiserfs are the only viable
options. I just hate the slowness of ext3 for rm -rf hardlink jobs, while
xfs and btrfs seem to be very unstable.
- How stable is XFS?
-
Holger Parplies wrote at about 03:45:21 +0200 on Wednesday, June 8, 2011:
Hi,
Gene Cooper wrote on 2011-06-07 16:28:01 -0700 [[BackupPC-users] Restore
Files Newer Than Date]:
[...]
I had a server fail today, but there was a full backup done last night.
It's many gigabytes over
Jim Wilcoxson wrote at about 14:10:31 + on Thursday, June 9, 2011:
Boniforti Flavio flavio at piramide.ch writes:
Hello to both of you, Adam and Andrew.
Great suggestion, backing up the VMs as if they were normal
clients...
That's an option I can't afford to
Andrew Schulman wrote at about 09:31:14 -0400 on Tuesday, June 7, 2011:
If they don't change between runs, backuppc will pool the new instance
with the previous, although a full backup may still take a long time as
the block checksum verification is done over the whole file. If they do
Steven Johnson wrote at about 14:45:09 -0400 on Tuesday, June 7, 2011:
Greetings. What's the best version of rsync.exe to use on Windows machines
with Cygwin? I've been using version 2.6.8 but experience very slow transfer
rates and often get the "file has vanished" error in my xfer logs.
Sean Boran wrote at about 09:59:30 +0200 on Friday, June 3, 2011:
*Each* morning my root disk free space drops by 1GB, starting at 07:00
linearly until 09:00.
At 09:00, the 1GB is suddenly freed up again.
...and I'm trying to figure out why.
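One way to chase down a recurring space drop like this is to diff du snapshots during the window, and to check for deleted-but-still-open files, which du cannot see; a sketch:

```shell
# Take two du snapshots while the space is disappearing and compare them.
du -x --max-depth=2 / | sort -k2 > /tmp/du.before
sleep 600
du -x --max-depth=2 / | sort -k2 > /tmp/du.after
diff /tmp/du.before /tmp/du.after

# Space held by files deleted while a process still has them open never
# shows up in du; lsof +L1 lists open files with a link count below 1.
lsof +L1
```

If lsof +L1 turns up a large file, the sudden 09:00 recovery would be the owning process closing (or rotating) it.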
- The server is dedicated for backuppc.
Nope - hard links are the essence of BackupPC
Scott wrote at about 16:20:37 -0400 on Tuesday, May 31, 2011:
Is it possible for backuppc to use symbolic or soft links instead of hard
links?
I found a seemingly great software FlexRaid which allows me to create a
software parity raid using
exarkun wrote at about 06:21:37 -0700 on Tuesday, May 24, 2011:
Can you give an example of how to run your BackupPC_copyPCPool.pl script?
No matter what I do, I get the error:
Backup './pc' does not contain a 'backupInfo' file
I have tried (from /var/lib/BackupPC):
Michael Stowe wrote at about 08:25:35 -0500 on Tuesday, May 24, 2011:
I did a relatively short filesystem comparison when I moved my BackupPC
pool to another set of drives. The high level results:
jfs, xfs: quick, stable
reiserfs: not stable
ext4: slow
ext3: very
Ahhh that would explain...
I have added the routine to the Wiki - hopefully that will avoid some
of the whitespace issues
https://sourceforge.net/apps/mediawiki/backuppc/index.php?title=BackupPC_CopyPcPool
Hopefully that helps...
Michael Stowe wrote at about 11:19:51 -0500 on Tuesday, May 24,
Nick Bright wrote at about 23:27:58 -0500 on Sunday, May 22, 2011:
On 5/22/2011 7:14 PM, Nick Bright wrote:
Sounds to me like the BackupPC_deleteFile script is the way to go:
http://sourceforge.net/apps/mediawiki/backuppc/index.php?title=BackupPC_DeleteFile
I found it some time after
Jeffrey J. Kosowsky wrote at about 10:51:01 -0400 on Monday, May 23, 2011:
Nick Bright wrote at about 23:27:58 -0500 on Sunday, May 22, 2011:
On 5/22/2011 7:14 PM, Nick Bright wrote:
Sounds to me like the BackupPC_deleteFile script is the way to go:
http://sourceforge.net/apps
Holger Parplies wrote at about 16:28:06 +0200 on Monday, May 23, 2011:
Hi,
Nick Bright wrote on 2011-05-22 23:27:58 -0500 [Re: [BackupPC-users] How to
delete specific files from backups? (with BackupPC_deleteFile.pl)]:
On 5/22/2011 7:14 PM, Nick Bright wrote:
Sounds to me like the
Holger Parplies wrote at about 17:22:27 +0200 on Monday, May 23, 2011:
I erroneously wrote to the list
Holger Parplies wrote on 2011-05-23 16:30:57 +0200 [Re: [BackupPC-users]
Archive without tarball - directly to the file system]:
Hello,
[...]
sorry, that was meant to be
Nick Bright wrote at about 16:06:32 -0500 on Sunday, May 22, 2011:
On 5/22/2011 3:29 PM, Michael Stowe wrote:
Recently I had a bit of an error condition that generated several very,
very large files on the file system of a server being backed up by
BackupPC. This resulted in 200GB of
Timothy J Massey wrote at about 14:00:43 -0400 on Tuesday, May 17, 2011:
Jeffrey J. Kosowsky backu...@kosowsky.org wrote on 05/17/2011 01:11:16
PM:
I haven't noticed any NFS problems due to hard links.
I get approximately the same speed of transfer operations when I am
reading
Carl Wilhelm Soderstrom wrote at about 15:02:37 -0500 on Tuesday, May 17, 2011:
On 05/17 02:30 , Les Mikesell wrote:
On 5/17/2011 2:06 PM, Carl Wilhelm Soderstrom wrote:
My advice is to get a 3ware RAID card and whatever disks you like for it.
There's some sharp corners on the
Timothy J Massey wrote at about 11:25:12 -0400 on Tuesday, May 17, 2011:
many have tried to use pools
mounted via NFS, and few (none?) have succeeded. BackupPC is pretty hard
on filesystems, and NFS is fragile.
Not sure where you got that data from. I have been using NFS for 3
years to
lmirg...@microworld.org wrote at about 16:23:53 +0200 on Tuesday, May 17, 2011:
Hello backuppc-users,
I would like to backup all the machines of my company (12 laptops,
Windows/Mac/Linux) in a centralized way on a NAS device.
I like BackupPC a lot and, if possible, I would like to use it
Timothy J Massey wrote at about 11:48:05 -0400 on Tuesday, May 17, 2011:
The point was basically any block-level attachment (except maybe USB).
The problem comes from NFS' poor handling of the zillions of hard links
that BackupPC wants to use.
I haven't noticed any NFS problems due to
Timothy J Massey wrote at about 12:11:58 -0400 on Tuesday, May 17, 2011:
Given that Craig is working on a significant rewrite for BackupPC 4.0,
this may be something that would be best worked on there, while major
things are changing anyway.
Good point to consider..
Also, the new system
Les Mikesell wrote at about 11:20:12 -0500 on Tuesday, May 17, 2011:
On 5/17/2011 10:25 AM, Timothy J Massey wrote:
One option for using a NAS is to use a standard PC in front of it and
mount the NAS via iSCSI (**NOT** NFS!!!) and use the NAS' storage that
way. That will give you the
Michael Stowe wrote at about 11:57:42 -0500 on Tuesday, May 17, 2011:
Has anyone tried using BackupPC and MooseFS (http://www.moosefs.org/)?
No, but it seems like a *really* bad idea -- the concept of slow, off-box,
redundant storage isn't a really good fit with the concepts of
Steven Johnson wrote at about 14:35:02 -0400 on Monday, May 16, 2011:
Thank you for your reply; I would like to exclude all .bak files from ever
being backed up (in any share and its subfolders) but i cannot figure out
how to do this. I used the * for all shares but it only excludes
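A hedged sketch of what the poster seems to be after: $Conf{BackupFilesExclude} accepts a hash keyed by share name, and the key '*' applies its list to every share (exact pattern semantics still depend on the XferMethod in use):

```perl
# Exclude all .bak files in every share and its subdirectories.
# With the tar and rsync methods this pattern matches the basename
# anywhere in the tree; smbclient's exclude semantics differ slightly.
$Conf{BackupFilesExclude} = {
    '*' => ['*.bak'],
};
```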
Michael Stowe wrote at about 10:45:01 -0500 on Wednesday, May 11, 2011:
Thought I'd mention that I'm in the process of moving my 1TB backup pool
from one jfs array to another on the same system. Both arrays are RAID-5
via mdadm.
I'm using rsync -avP -H src dest, which I started
Rob Sheldon wrote at about 10:42:03 -0700 on Wednesday, May 11, 2011:
On Wed, 11 May 2011 10:45:01 -0500, Michael Stowe wrote:
Thought I'd mention that I'm in the process of moving my 1TB backup
pool
from one jfs array to another on the same system. Both arrays are
RAID-5
via
Rob Sheldon wrote at about 12:04:15 -0700 on Wednesday, May 11, 2011:
On Wed, 11 May 2011 14:32:56 -0400, Jeffrey J. Kosowsky wrote:
How big was your pool and pc tree?
We currently have two BackupPC servers we work with -- one of them is
only about 20G of disk but is using
RYAN M. vAN GINNEKEN wrote at about 01:47:51 -0600 on Wednesday, May 4, 2011:
Thank you for your reply
Adjust the command line to work with FreeBSD not sure how to go about this?
I presume editing the config.pl script; any hints? I checked that
/usr/bin/tar exists and wonder why would
Holger Parplies wrote at about 23:03:57 +0200 on Tuesday, May 3, 2011:
Hi,
Les Mikesell wrote on 2011-05-03 15:14:08 -0500 [Re: [BackupPC-users] Not
able to use the backuppc account in freebsd]:
I'm surprised that sudo doesn't honor the user's shell.
really? It's really not
RYAN M. vAN GINNEKEN wrote at about 01:42:42 -0600 on Tuesday, May 3, 2011:
Running BackupPC on FreeBSD 8.1 doing some troubleshooting, but cannot seem
to get this command to work. Also cannot seem to log in as the backuppc user
(no shell and all that); please help
Les Mikesell wrote at about 12:26:04 -0500 on Tuesday, May 3, 2011:
On 5/3/2011 12:11 PM, Jeffrey J. Kosowsky wrote:
RYAN M. vAN GINNEKEN wrote at about 01:42:42 -0600 on Tuesday, May 3, 2011:
Running BackupPC on FreeBSD 8.1 doing some troubleshooting, but
cannot seem to get
I built my own (for FC12)...
Richard Shaw wrote at about 16:31:15 -0500 on Friday, April 29, 2011:
On Fri, Apr 29, 2011 at 3:40 PM, John Rouillard
rouilj-backu...@renesys.com wrote:
Current release is 3.2.0 released last July and I know there were some
gui improvements, but I am not sure
Adam Goryachev wrote at about 16:08:56 +1000 on Wednesday, April 27, 2011:
On 27/04/11 15:44, Jeffrey J. Kosowsky wrote:
Les Mikesell wrote at about 12:08:22 -0500 on Tuesday, April 26,
2011:
On 4/26/2011 11:38 AM, Michael Conner
Adam Goryachev wrote at about 01:40:31 +1000 on Thursday, April 28, 2011:
On 28/04/11 01:11, Michael Stowe wrote:
I've got a number of random people's home PCs that I back up (friends
is periodically removed and rotated offsite.
On 27/04/11 22:47, Jeffrey J. Kosowsky wrote:
I still think that losing all 3 (which however unlikely is still
possible) is way, way, way, worse than potentially losing 1-2 out of 3
and still having a spare to recover (carefully) from. And my case
Les Mikesell wrote at about 15:48:29 -0500 on Wednesday, April 27, 2011:
On 4/27/2011 3:18 PM, Jeffrey J. Kosowsky wrote:
Which is *precisely* what I was proposing except that in addition to
failing the device, I suggested removing it physically for extra
security (again assuming you
Matthias Meyer wrote at about 11:43:23 +0200 on Monday, April 25, 2011:
Hi Jeffrey,
Thanks for sending your perl script.
Hmmm... not sure I even remember which of my scripts you are talking about...
Unfortunately I can't answer you because:
- The following addresses had
Les Mikesell wrote at about 12:08:22 -0500 on Tuesday, April 26, 2011:
On 4/26/2011 11:38 AM, Michael Conner wrote:
I installed BPC a few weeks ago and have been doing testing and setup
since then and have things working pretty well on several linux, windows,
and mac clients
comfi wrote at about 15:57:45 -0700 on Monday, April 18, 2011:
I recently fired up BackupPC as a replacement for our convoluted
and outdated Amanda setup to backup an environment of about 200
servers. So far, I have BackupPC version 3.1.0 installed on an
Ubuntu 10.04 system. I'm using
comfi wrote at about 09:49:00 -0700 on Tuesday, April 19, 2011:
Thanks for all the responses, guys!
First off, I never meant to indict BackupPC. I've known all along
it was an NFS issue (hence the subject title and the observation
that everything runs perfectly under iSCSI). However,
Holger Parplies wrote at about 20:01:28 +0200 on Wednesday, April 20, 2011:
4.) There *was* an attempt to write a specialized BackupPC client (BackupPCd)
quite a while back. I believe this was given up for lack of human
resources. I always found this matter rather interesting, but
martin f krafft wrote at about 09:23:07 +0200 on Sunday, April 17, 2011:
Dear list,
we are facing a policy change requiring people to rename data files
in a trivial way (replace ':' with '-').
In terms of backuppc, this means that the files will have to be
transferred again,
of archives that difficult?
Jake Wilson
On Wed, Apr 13, 2011 at 3:33 PM, Jeffrey J. Kosowsky
backu...@kosowsky.orgwrote:
Jake Wilson wrote at about 11:54:13 -0600 on Wednesday, April 13, 2011:
In order to minimize cpu load on our servers at the office, I'd like to
make
Holger Parplies wrote at about 19:48:52 +0200 on Thursday, April 14, 2011:
And Jeffrey, if you could give me a pointer to the previous thread, I'll add
anything from there, or you could, of course, also do that yourself ;-).
Sure I posted the reference on my last reply to Jake...
Tyler J. Wagner wrote at about 14:02:26 +0100 on Wednesday, April 13, 2011:
On Wed, 2011-04-13 at 14:38 +0200, Sorin Srbu wrote:
Googling a bit I found out 32-bit Linux is limited to 2GB per file on
account of the file system. 8GB is a lot more than 2GB, so this would
explain
the
Jake Wilson wrote at about 11:54:13 -0600 on Wednesday, April 13, 2011:
In order to minimize cpu load on our servers at the office, I'd like to make
sure that the full backups only occur on the weekends. Is there a
straightforward way to accomplish this in the interface or do I need to go
Timothy J Massey wrote at about 15:40:05 -0400 on Tuesday, April 12, 2011:
Jeffrey J. Kosowsky backu...@kosowsky.org wrote on 04/10/2011 01:57:01
PM:
The only problem with dd is that you would generally need to either
make a snapshot (e.g., using lvm2) or shutdown BackupPC
Timothy J Massey wrote at about 18:49:57 -0400 on Tuesday, April 12, 2011:
Jeffrey J. Kosowsky backu...@kosowsky.org wrote on 04/12/2011 05:20:07
PM:
Timothy J Massey wrote at about 15:43:28 -0400 on Tuesday, April 12,
2011:
I think #4 is underappreciated given how cheap hardware
Jake Wilson wrote at about 17:06:55 -0600 on Tuesday, April 12, 2011:
We have a production server on the network with several terabytes of data
that needs to be backed up onto our BackupPC server. The problem is that
the production server is in use most of the day. There is a lot of normal
Timothy J Massey wrote at about 18:44:06 -0400 on Tuesday, April 12, 2011:
Jeffrey J. Kosowsky backu...@kosowsky.org wrote on 04/12/2011 05:10:15
PM:
Timothy J Massey wrote at about 15:40:05 -0400 on Tuesday, April 12,
2011:
Jeffrey J. Kosowsky backu...@kosowsky.org wrote
Timothy J Massey wrote at about 21:57:11 -0400 on Tuesday, April 12, 2011:
Jeffrey J. Kosowsky backu...@kosowsky.org wrote on 04/12/2011 07:36:36
PM:
Just as a side point, my plugcomputer + USB drive is so small and
portable that I actually have it located inside my network box
Saturn2888 wrote at about 19:40:50 -0700 on Tuesday, April 12, 2011:
The BackupPC_copyPcPool script and the tar copy are great for backing up the
pool but both suffer the same fate as dd; there's no method of only
transferring changes of files which destroys available bandwidth and leaves
Timothy J Massey wrote at about 22:46:39 -0400 on Tuesday, April 12, 2011:
Timothy J Massey tmas...@obscorp.com wrote on 04/12/2011 10:13:11 PM:
But give it a try first: unless that production server is a 600MHz
machine with 512MB RAM and a single SATA spindle, you will most
Tod Detre wrote at about 09:53:57 -0400 on Monday, April 11, 2011:
For backing up my BackupPC pool I've found that using xfs on top of
lvm to be a great solution.
lvm allows you to make a snapshot of the pool. This is nice if you
only have a small window between when your servers stop
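The snapshot approach described above looks roughly like this (volume-group and mount names are hypothetical; the nouuid option matters for xfs, which otherwise refuses to mount a snapshot carrying the same UUID as its origin volume):

```shell
# Snapshot the pool volume, copy from the frozen view, then drop it.
lvcreate --size 10G --snapshot --name pool-snap /dev/vg0/backuppc
mount -o ro,nouuid /dev/vg0/pool-snap /mnt/pool-snap
# ... copy /mnt/pool-snap to the backup target ...
umount /mnt/pool-snap
lvremove -f /dev/vg0/pool-snap
```

The snapshot size only needs to cover writes made to the origin while the snapshot exists, not the whole pool.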
Les Mikesell wrote at about 00:16:32 -0500 on Sunday, April 10, 2011:
On 4/9/11 10:39 PM, Jeffrey J. Kosowsky wrote:
Les Mikesell wrote at about 11:31:28 -0500 on Saturday, April 9, 2011:
On 4/9/11 12:28 AM, Saturn2888 wrote:
:: mdadm ::
This is what I use, with a raid1
hans...@gmail.com wrote at about 17:19:09 +0700 on Sunday, April 10, 2011:
On Sun, Apr 10, 2011 at 12:16 PM, Les Mikesell lesmikes...@gmail.com wrote:
I've never heard of raid sync affecting the original disk(s). I've been
doing
it for years, first with a set of firewire external
Cesar Kawar wrote at about 14:39:58 +0200 on Sunday, April 10, 2011:
On 10/04/2011, at 12:19, hans...@gmail.com wrote:
On Sun, Apr 10, 2011 at 12:16 PM, Les Mikesell lesmikes...@gmail.com
wrote:
I've never heard of raid sync affecting the original disk(s). I've been
Les Mikesell wrote at about 11:31:28 -0500 on Saturday, April 9, 2011:
On 4/9/11 12:28 AM, Saturn2888 wrote:
:: mdadm ::
This is what I use, with a raid1 created with 3 members but one missing. I
periodically rotate disks into a hotswap sata bay, add it long enough to
re-sync, and
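The rotation scheme described above can be sketched with mdadm (device names are hypothetical):

```shell
# Build a 3-way RAID1 with one slot intentionally left empty.
mdadm --create /dev/md0 --level=1 --raid-devices=3 \
    /dev/sda1 /dev/sdb1 missing

# Rotate an offsite disk in: add it and wait for the re-sync.
mdadm --manage /dev/md0 --add /dev/sdc1
cat /proc/mdstat                     # watch the rebuild finish

# Then fail and remove it so it can go offsite again.
mdadm --manage /dev/md0 --fail /dev/sdc1
mdadm --manage /dev/md0 --remove /dev/sdc1
```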
Lee A. Connell wrote at about 09:02:22 -0400 on Tuesday, April 5, 2011:
I am having a strange issue on only one of my backed up hosts. I have 12
consecutive days of reported good backups, but when I go to click on one
of the backup numbers it is blank. I checked in the directory path
Timothy J Massey wrote at about 10:03:58 -0400 on Thursday, March 31, 2011:
Jeffrey J. Kosowsky backu...@kosowsky.org wrote on 03/30/2011 04:11:05
PM:
Wouldn't a better/more robust solution be to define the blackout
period for that machine to exclude everything except for the weekend
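A hedged sketch of that blackout-period idea: $Conf{BlackoutPeriods} suppresses automatically scheduled backups (fulls and incrementals alike) during the listed windows, so blacking out Monday through Friday leaves only the weekend:

```perl
# Block automatic backups Mon-Fri around the clock; weekends stay open.
# weekDays uses 0 = Sunday .. 6 = Saturday.
$Conf{BlackoutPeriods} = [
    {
        hourBegin => 0,
        hourEnd   => 23.99,
        weekDays  => [1, 2, 3, 4, 5],
    },
];
```

Note that blackouts only take effect once the host has answered $Conf{BlackoutGoodCnt} consecutive pings, and manually queued backups ignore them.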
nhoel...@sinet.ca wrote at about 09:59:43 -0400 on Thursday, March 31, 2011:
I am running backuppc 3.1.0-4 on a plug computer (ARM processor) with the
Perl rsync fix for ARM processors. On March 24, backuppc did an
incremental backup that picked up two 350MB files which had been uploaded
Timothy J Massey wrote at about 11:55:15 -0400 on Wednesday, March 30, 2011:
Bowie Bailey bowie_bai...@buc.com wrote on 03/30/2011 10:52:21 AM:
On 3/30/2011 10:16 AM, Scott wrote:
Full backups from one machine look like they are going to take 12
hours, so a night time full backup
Michael Conner wrote at about 08:37:33 -0500 on Wednesday, March 23, 2011:
I'm a newbie with limited Linux experience, but I've had BPC running for
about a week on a test machine running Centos 5.5. So far I've successfully
gotten backups for a Windows machine over SMB and another linux
Tyler J. Wagner wrote at about 16:26:28 + on Tuesday, March 22, 2011:
On Tue, 2011-03-22 at 10:47 -0500, Les Mikesell wrote:
Agreed - I've always thought it would be nice if backuppc were aware of
hosts grouped on a network route as well and could separately limit the
concurrency
Tyler J. Wagner wrote at about 17:24:29 + on Tuesday, March 22, 2011:
On Tue, 2011-03-22 at 13:09 -0400, Jeffrey J. Kosowsky wrote:
I think it would still be helpful to have a hook that would allow some
type of query of bandwidth available. Perhaps most users would never
use
Joe Konecny wrote at about 09:22:05 -0400 on Friday, March 18, 2011:
On 3/18/2011 5:00 AM, hans...@gmail.com wrote:
People answering questions that have already been answered
thousands of times are doing nothing but wasting time and bandwidth -
not just theirs but everyone else's
Scott wrote at about 13:06:21 -0400 on Friday, March 18, 2011:
I am trying to figure out why my backuppc statistics are not generating
(everything is basically all 0's in the status page -- see below).
Notice the pool is at 0GB...
So looking at the code, it loops through the
Timothy Murphy wrote at about 16:08:55 + on Friday, March 18, 2011:
Les Mikesell wrote:
I've seen memtest take 3 days to find bad RAM. Sometimes only certain
patterns/timing will fail.
The standard memtest86+ completes in about 90 minutes
on all the computers I have.
As
Les Mikesell wrote at about 10:49:33 -0500 on Friday, March 18, 2011:
On 3/18/2011 10:26 AM, Joe Konecny wrote:
On 3/18/2011 11:00 AM, Les Mikesell wrote:
But no one is selling a product here.
It is being sold in a metaphorical sense.
So, do I get a metaphorical commission if I
Les Mikesell wrote at about 13:50:00 -0500 on Friday, March 18, 2011:
On 3/18/2011 12:49 PM, Long V wrote:
2011-03-18 13:15:00 unexpected empty share name skipped
Seems odd to me. Do you have any trailing characters after your real
share name?
I have seen that (at least with
Timothy Murphy wrote at about 21:36:19 + on Friday, March 18, 2011:
Jeffrey J. Kosowsky wrote:
Timothy Murphy wrote at about 17:57:18 + on Friday, March 18, 2011:
In my (very long, I started with a paper-tape machine) experience
one almost always gets better advice
Bowie Bailey wrote at about 16:39:59 -0400 on Tuesday, March 15, 2011:
On 3/15/2011 3:49 PM, David Herring wrote:
I'm trying to backup windows servers with approx 3 partitions of 100G
each over a WAN link. This takes 'days' to run and never successfully
completes. I'm using rsyncd on
Holger Parplies wrote at about 06:04:23 +0100 on Monday, March 14, 2011:
Hi,
Jeffrey J. Kosowsky wrote on 2011-03-09 17:43:41 -0500 [Re: [BackupPC-users]
Still trying to understand reason for extremely slow backup speed...]:
[...] I actually used my program that I posted to copy
Cesar Kawar wrote at about 07:33:52 +0100 on Sunday, March 13, 2011:
Sent from my iPhone
On 13/03/2011, at 02:10, Jeffrey J. Kosowsky backu...@kosowsky.org
wrote:
Cesar Kawar wrote at about 23:07:53 +0100 on Friday, March 11, 2011:
On 11/03/2011, at 21:13
Les Mikesell wrote at about 11:48:36 -0500 on Sunday, March 13, 2011:
On 3/11/11 11:27 AM, Cesar Kawar wrote:
I know that is only to process 800,000 files, but with version 3.0.0 and
later, it doesn't load all the files at once. With a 512MB computer
you'll be fine, but in the
Gerald Brandt wrote at about 16:13:23 -0600 on Friday, March 11, 2011:
Not that I care too much, but that method uses a non-trivial approach
that I originally developed, including code that is
*verbatim* copy-and-pasted without any attribution and without
GPL-license from my
Cesar Kawar wrote at about 23:07:53 +0100 on Friday, March 11, 2011:
On 11/03/2011, at 21:13, Jeffrey J. Kosowsky wrote:
Cesar Kawar wrote at about 18:27:34 +0100 on Friday, March 11, 2011:
On 11/03/2011, at 14:59, Jeffrey J. Kosowsky wrote:
Cesar Kawar wrote
Doug Lytle wrote at about 09:52:47 -0500 on Saturday, March 12, 2011:
Les Mikesell wrote:
Google popped this:
http://majentis.com/2011/01/03/backuppc-with-sshrsyncvss-on-windows-server/
Along this same subject line,
I'm currently setting up our Windows XP machines with
Cesar Kawar wrote at about 10:08:10 +0100 on Friday, March 11, 2011:
On 11/03/2011, at 08:04, hans...@gmail.com wrote:
On Fri, Mar 11, 2011 at 10:56 AM, Rob Poe r...@poeweb.com wrote:
I'm using RSYNC to do backups of 2 BPC servers. It works swimmingly, you
plug the USB
Cesar Kawar wrote at about 18:27:34 +0100 on Friday, March 11, 2011:
On 11/03/2011, at 14:59, Jeffrey J. Kosowsky wrote:
Cesar Kawar wrote at about 10:08:10 +0100 on Friday, March 11, 2011:
I think rsync uses little if any cpu -- after all, it doesn't do much
other than do
Cesar Kawar wrote at about 19:53:42 +0100 on Friday, March 11, 2011:
On 11/03/2011, at 18:34, Michael Stowe wrote:
What's the current state of the art of doing windows backups? I've just
been using smb and letting a commercial system run separately to cover
the open file and
hans...@gmail.com wrote at about 04:04:04 +0700 on Saturday, March 12, 2011:
On Fri, Mar 11, 2011 at 9:05 PM, Les Mikesell lesmikes...@gmail.com wrote:
It is the number of files with more than one link that matter, not so much
the
total size. But the newer rsync that doesn't need the
Michael Conner wrote at about 12:55:58 -0600 on Thursday, March 10, 2011:
Thanks to all who replied. You all basically confirmed my feeling that using
our web server as the backup server was not best practice. I just hoped we
might get by without buying another computer, even though it
Carl Wilhelm Soderstrom wrote at about 13:35:31 -0600 on Thursday, March 10,
2011:
On 03/10 12:55 , Michael Conner wrote:
One additional question: are there any advantages to any particular flavor
of Linux for BPC?
Debian.
Just because it's the best distro ever of all time and
Michael Conner wrote at about 14:46:21 -0600 on Thursday, March 10, 2011:
That is good to know. Actually things are a little better than I thought,
the spare machine is Dell Dimension 2400 with a Pentium 4, max 2 gb memory.
So I guess I could slap a new bigger drive into it and use it. My
Tyler J. Wagner wrote at about 23:00:47 + on Thursday, March 10, 2011:
On Thu, 2011-03-10 at 20:10 +0100, Cesar Kawar wrote:
On 10/03/2011, at 19:55, Michael Conner wrote:
One additional question: are there any advantages to any particular
flavor of Linux for BPC?
I
I run BackupPC 3.2.0 under Fedora 12 on a P4 2.8GHz server with 2GB
of RAM and with storage mounted via NFS on a low-end NAS (DNS-323) on
a 100MB/sec LAN. Before making a number of changes to both server and
NAS (I know bad idea), I would get reasonable backup speeds where for
example an
Les Mikesell wrote at about 14:58:30 -0600 on Wednesday, March 9, 2011:
On 3/9/2011 2:04 PM, Jeffrey J. Kosowsky wrote:
On the NAS, I upgraded the kernel to a debian kernel, changed from
ext2 to lvm2, and changed from RAID1 to non-RAID (which together only
slightly affected NFS
Cesar Kawar wrote at about 22:51:40 +0100 on Wednesday, March 9, 2011:
I assume you changed disks on the NAS, and you had to manually replicate the
pool from the old storage without LVM support to the new one, LVM-based.
If that is correct, my guess is that the pool or the cpool or both
Timothy Murphy wrote at about 13:30:16 + on Tuesday, March 8, 2011:
hans...@gmail.com wrote:
if a user takes the
attitude that a program should have well-written documentation
designed for non-technical users to understand, and any program that
doesn't is somehow deficient in
Brad Alexander wrote at about 14:13:22 -0500 on Friday, March 4, 2011:
Thats a good point, Les. I also thought that with backuppc 4.0 on the way,
I'd mention maybe having some test options within backuppc itself. I was
more thinking out loud, and this thread kind of gelled the need in my
Just a plug that Craig Barratt -- the BackupPC creator -- has posted
several detailed emails to the developers list
(backuppc-de...@lists.sourceforge.net) outlining the key design and
feature changes that he is implementing in 4.x.
From all appearances, 4.x is a *substantial* rewrite and includes
Timothy J Massey wrote at about 23:13:52 -0500 on Tuesday, February 22, 2011:
Dennis Blewett dennis.blew...@gmail.com wrote on 02/22/2011 10:17:29 PM:
13,849 items, totalling 3.8 GB
It would appear that I have a feasible number of files. I'm not sure
how many more files I will
John Goerzen wrote at about 14:26:33 + on Wednesday, February 23, 2011:
Carl Wilhelm Soderstrom chrome at real-time.com writes:
On 02/21 11:00 , Dennis Blewett wrote:
Will I come across many problems in later restoring the pool's data if I
just rsync /var/lib/backuppc to
John Goerzen wrote at about 08:44:54 -0600 on Wednesday, February 23, 2011:
BackupPC is being surprisingly slow even for incrementals. This appears
to be tied to the rsync backend in some fashion.
Here's an example. It took 5 *HOURS* to run an incremental over a
machine in which the
Here is an updated and refined version of my routine
BackupPC_digestVerify.pl that can verify, fix, or add rsync digests --
both the md4 block and file checksums -- to any compressed BackupPC
file. It can operate either on a single file or on a directory tree of
files.
See the usage for more
gregwm wrote at about 10:26:51 -0600 on Tuesday, February 22, 2011:
rsync'ing the BackupPC data pool is generally recommended against. The
number of hardlinks causes an explosive growth in memory consumption by
rsync and while you may be able to get away with it if you have 20GB of