On 08/15 10:31 , Julian Robbins wrote:
Am I missing something, or is it right that my 'normal' pool is always
empty?
compression for files is turned on by default. you only get files in your
uncompressed pool if you explicitly turn off compression (either
system-wide, or per-host).
for
On 08/19 08:02 , Paul Fox wrote:
rsync does, but BackupPC's File::RsyncP 0.52 doesn't support
the -H option. So BackupPC's rsync doesn't support hardlinks.
craig -- thanks for confirming that. I was pretty sure I
remembered something was wrong with hard links.
I stand corrected. I
On 08/31 12:56 , Scott Gamble wrote:
Is it possible to backup Novell servers with Backuppc?
should be. I'm pretty sure rsyncd is available on Netware (at least it was
on the 6.5 box I looked at). Otherwise, if you can mount the filesystem, you
can back it up.
--
Carl Soderstrom
Systems
On 09/03 06:22 , Hamish Guthrie wrote:
I am sorry to harp on about this, but, it gets back to my analysis of
rsync under backuppc a few months ago - I still think that we need to do
an implementation of File::RsyncP in C as opposed to perl.
Your point is well-taken and I'm willing to accept
On 09/05 09:12 , Ed Burgstaler wrote:
I am currently using a 40GB HD with LVM and I'm running out of space.
Can anyone tell me how I can go about adding a new 300GB HD to CentOS4 to
expand my BackupPC data storage?
Appreciate a step-by-step if not too much to ask. I'm not a Linux expert
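For reference, growing an LVM setup onto a new disk usually follows the sequence below. This is a sketch only: the device name (/dev/sdb) and the volume-group/logical-volume names are assumptions (check vgdisplay/lvdisplay first), and the last step assumes an ext3 filesystem (on CentOS 4 an online resize may need ext2online instead of resize2fs).

```shell
# Sketch: add a new disk to an existing LVM volume group and grow the LV.
# /dev/sdb, VolGroup00, and LogVol00 are assumed names -- verify before running.
pvcreate /dev/sdb                                # initialize the new disk for LVM
vgextend VolGroup00 /dev/sdb                     # add it to the volume group
lvextend -l +100%FREE /dev/VolGroup00/LogVol00   # grow the logical volume
resize2fs /dev/VolGroup00/LogVol00               # grow the filesystem to match
```

All of these must run as root, and the filesystem-resize step is the one worth double-checking against your distribution's tools.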
On 09/12 07:03 , dosseh edjé wrote:
Error connecting to rsync daemon at $host:
873: inet connect: connection refused
do you have a newline at the end of your rsyncd.secrets file?
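A quick way to check is to look at the last byte of the file. The path below is a throwaway example, not the real secrets file:

```shell
# Create a sample secrets file *without* a trailing newline, for illustration.
printf 'backuppc:secretpass' > /tmp/rsyncd.secrets.demo

# tail -c 1 prints the last byte; command substitution strips a trailing
# newline, so a non-empty result means the newline is missing.
if [ -n "$(tail -c 1 /tmp/rsyncd.secrets.demo)" ]; then
    echo "missing trailing newline"
else
    echo "has trailing newline"
fi
```

Run the same check against your real rsyncd.secrets path; if it reports a missing newline, appending one with `echo >> rsyncd.secrets` fixes it.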
I'm running out of disk space on my backup server, and it's run out of space
on a couple of occasions. when it does this, some hosts 'forget' all their old
backups -- those backups no longer appear in the pc/hostname/backups file.
I know the backups.old file has a copy of the last known-good
On 09/13 10:38 , Craig Barratt wrote:
Anyhow, this is something I should work on. Do you need a solution
quickly?
no, it's not critical. just wanted to let you know that there are some
failure modes that make certain backup information difficult to access.
if nothing else, I should be able
On 09/14 10:09 , Kanwar Ranbir Sandhu wrote:
On Tue, 2005-09-13 at 22:38 -0700, Craig Barratt wrote:
Anyhow, this is something I should work on. Do you need a solution
quickly?
I know Carl said it's not urgent, so how about setting the target for
the next BackupPC release?
it would be
On 09/15 02:21 , Nicolai Rasmussen wrote:
- Is it possible to define a retention period for deleted files? Say,
I delete a file from my file server, then I would like the file to
continue to be available on the backup server, for some given time...
yes. you can configure how long to
On 09/21 08:38 , Les Mikesell wrote:
Cacti uses snmp so it can monitor routers and managed switches
as well as any host with snmp enabled. If you only want the
local host traffic or can configure your switches to bridge
traffic to a monitoring port, ntop is really nice:
http://www.ntop.org.
On 09/21 01:39 , Justin Pessa wrote:
I've set BackupPC to keep 5 incremental backups and have old backups removed
from the pool after 20 days. I've got 24 hosts and together they are using
an entire 250GB disk. As a result the backups have reached the 99% threshold
and do not run.
could it be
On 10/24 01:44 , Jason Purdy wrote:
Then I would install Debian (thanks to Ludovic for the packaging!) with
a reiser fs -- what kind of partition scheme would you use?
put /var/lib/backuppc on its own partition. This way:
- you can mount it with different options for better performance; like
On 10/29 06:11 , Rich Duzenbury wrote:
Lastly, which is the most appropriate file system to use on the backuppc
box? I've used ext3 in the past, but I don't know if that is the best
option.
I've been using Reiserfs; other people report good success with XFS. Might
be informative for someone
On 11/03 11:26 , Nicolai Rasmussen wrote:
Has anyone tried making LVM partitions on USB devices?
yes. it works just fine. performance will suck tho.
Is there any sense in doing this *?*
If the machine catches fire, it might not burn up the external drive.
I do it on a workstation where I
On 11/05 08:28 , Nate Carlson wrote:
On Thu, 3 Nov 2005, Carl Wilhelm Soderstrom wrote:
the config.pl file makes mention of an $I_option variable for smb
backups; but it's never mentioned how to set that. I want to set a '-I
192.168.1.1' (hypothetically speaking) for a particular
On 11/11 07:48 , Les Mikesell wrote:
In theory 'rsync -aH' could copy the archive area, but it will likely be
too slow to be practical due to all the hardlinks.
there's also the issue of rsync's memory consumption, which can be pretty
bad on a big tree of hardlinked files. On a machine with
On 11/11 12:26 , Les Mikesell wrote:
If that's a common problem, you might want to change your mailbox
format to maildir (non-trivial but probably worth it). That will
also make backuppc more efficient since the unchanged files will
all become links in the pool.
maildir format really makes a
On 11/14 09:48 , Patrick Friedel wrote:
I know I could set blackout periods, but those seem to be more aligned
towards not starting a backup during the business day rather than what
I'm looking for. Or are they more powerful than I'm giving them credit for?
set:
$Conf{FullPeriod} = -1;
On 11/13 03:38 , Craig Barratt wrote:
For the next release I plan to decouple BackupPC_dump
and BackupPC_restore from BackupPC_nightly, which will
solve your problem.
how are you planning on dealing with the possible race conditions and the
like?
On 11/13 03:58 , Craig Barratt wrote:
The value of the I_option is hardcoded as you see above, but it
should be whatever IP address is returned by nmblookup. So if
$Conf{NmbLookupFindHostCmd} is set up correctly, the -I option should
be set to the correct IP address. Perhaps the Win98 machine
On 11/10 06:04 , Trey Nolen wrote:
Unfortunately, due to space requirements, we *have* to use compression. Is
there anything I could use on the command line to maybe do a dummy restore
and pipe the output through the virus scanner? That would require a ton of
processor work, but it might be
On 11/16 10:42 , Les Mikesell wrote:
Even when rsync does a full, it just sends the filename list, then
exchanges block checksums over the files that already match. This
may take a long time but it uses very little bandwidth for the
unchanged files.
unfortunately, it seems that if the last
On 11/15 10:23 , Craig Barratt wrote:
Here's the plan. Can anyone see a problem with doing this?
BackupPC_nightly and BackupPC_link are still mutually exclusive.
But BackupPC_dump can overlap BackupPC_nightly.
This would be great, because under the current scheme, when backups take
more than
On 11/17 11:13 , Les Mikesell wrote:
The remote rsync doesn't send the hash matching the backuppc naming
scheme, so it can't identify matches that aren't in the previous
tree for that host.
oh. I thought that the backuppc pool hierarchy was just based on the md5sum
(or similar hash) of the
On 11/23 11:06 , Zolid, Jesper Haggren wrote:
Can anyone tell me whats wrong in my config?
You're using a comma (,) instead of a period (.).
$Conf{FullPeriod} = 29,97;
$Conf{IncrPeriod} = 0,97;
On 11/29 08:52 , Les Stott wrote:
Why doesn't anyone like running rsyncd on a windows box standalone?
It's not encrypted. Neither for the transfer, nor for the authentication.
Don't assume that your local network is safe. :)
2. rsyncd can be set to only allow connections from a single host
On 11/28 11:44 , Craig Barratt wrote:
Actually, the rsyncd authentication is a challenge/response type
using an MD4 hash, not plain text. The module and user name are
plain text. Sniffing would allow a dictionary-style attack (like
any challenge/response system) which would be successful if
On 12/19 09:40 , Keri Alleyne wrote:
Can anyone see this?
Thanks.
Nope. didn't get it. please resend.
;)
On 12/20 12:42 , Robin Lee Powell wrote:
su - backuppc -c /usr/share/backuppc/bin/BackupPC_tarCreate -h ball -n 555
-s ball2 \Games/MoOII\ \!/tmp/crap.tar
I get:
/usr/share/backuppc/bin/BackupPC_tarCreate: bad share or directory
'ball2/Games/MoOII'
What's the escaped '!' in there for?
On 12/20 07:37 , Robin Lee Powell wrote:
What's the escaped '!' in there for?
zsh for overwrite this file even if it exists. I got sick of
destroying things by accident with , so I told zsh to not let me
do that.
ah. cool. I've heard that some people really like zsh; but I've not learned
On 12/21 06:53 , Guus Houtzager wrote:
Ok, here are the scripts.
cool. this is just in time for the server migration I need to do right now.
I'm planning on doing it over nfs rather than rsyncd or rsync-over-ssh; but
I might give these scripts a bit of a try.
I was originally planning on just
I got the files all copied over from the old backup server to the new backup
server in record time by dd'ing the partition and sending it across the wire
with netcat. (then writing it to the new partition on the other machine and
using resize_reiserfs to take advantage of the new space).
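The same pipeline can be sketched with ordinary files standing in for the partitions (netcat itself needs two hosts, so a plain pipe is used here; all paths are illustrative):

```shell
# Create a 1 MiB "partition image" to stand in for /dev/old_partition.
dd if=/dev/zero of=/tmp/old_part.img bs=1024 count=1024 2>/dev/null

# Stream it through a pipe, the way 'dd | nc' would stream it over the wire.
dd if=/tmp/old_part.img bs=64k 2>/dev/null | dd of=/tmp/new_part.img bs=64k 2>/dev/null

# Confirm the copy is bit-for-bit identical before trusting it.
cmp -s /tmp/old_part.img /tmp/new_part.img && echo "images match"
```

For real partitions the source filesystem should be unmounted (or mounted read-only) during the dd, or the copy can be inconsistent.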
On 12/28 02:19 , [EMAIL PROTECTED] wrote:
is the slowness due to system resources (disk IO, for example, or a
huge memory requirement for chown(1) to maintain a list of files to
chown(2))
the box has 2GB RAM, and chown itself doesn't seem to be taking more than a
few KB
On 01/12 11:59 , Richard Smith wrote:
On 1/12/06, Carl Wilhelm Soderstrom [EMAIL PROTECTED] wrote: On 01/12
10:02 , Richard Smith wrote: Do I have to force a full backup if I add a
new share to a host? yes.
Ok. Well then is there a way to schedule a full backup to occur? Ireally
don't
On 01/12 01:28 , [EMAIL PROTECTED] wrote:
we are working on a hardware product using an ARM architecture and running
a form of embedded Linux. We are very much charmed by the functionality
BackupPC provides. We want to make this available to our users, but we do
not have Perl installed and
On 01/28 08:26 , David Rees wrote:
Is the backup RAID array at least the same size as the original? Your
best bet is probably to use tar + gzip to send the whole raw partition
over the network to the remote machine, but for this to work, you will
need to be able to unmount the primary so that
On 01/27 10:08 , Sean Gleason wrote:
I am currently running BackupPC to a number of remote offices and have
been recently asked to add another office that has way more data than
the bandwidth could support during the initial FULL
what I ended up doing when I ran into this issue, was breaking
On 01/31 02:25 , [EMAIL PROTECTED] wrote:
Is there any way to schedule a backup at a particular hour daily?
the tool for that is BackupPC_serverMesg, and you can run it out of cron.
Here's what I have in my notes, which includes a message from Craig, I
think.
BackupPC_serverMesg is used to
I'm backing up one (last!) win9x box over SMB, and I can't exclude files.
In the per-host config file, I have the following line:
$Conf{BackupFilesExclude} = {
    'c' => ['/RECYCLER/*', '/temp/*', '/WUTemp/*', '/WINDOWS/*',
            '*/Temporary?Internet?Files/*' ]
};
AFAIK, this should exclude the
On 02/09 10:42 , Lachlan Simpson wrote:
Does backuppc have the ability to encrypt backups? If I want to dump a backup
onto a removable HD and take it away, etc.
You could put an encrypted filesystem onto the HDD, put your data pool on
that filesystem, and encrypt it that way.
Look into
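One hedged sketch of that approach uses LUKS/dm-crypt; the device name and mount point below are assumptions, and cryptsetup options vary by version, so treat this as an outline rather than a recipe:

```shell
# Sketch: encrypt a removable drive with LUKS, then put the pool on it.
# /dev/sdc1 and /mnt/backup are assumed names; all steps require root.
cryptsetup luksFormat /dev/sdc1            # one-time: set up the encrypted volume
cryptsetup luksOpen /dev/sdc1 backup_crypt # unlock as /dev/mapper/backup_crypt
mkfs -t ext3 /dev/mapper/backup_crypt      # create a filesystem inside it
mount /dev/mapper/backup_crypt /mnt/backup # mount; copy the BackupPC pool here
```

The encryption then protects the drive at rest; BackupPC itself never sees it and needs no configuration change.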
On 02/13 07:34 , Craig Barratt wrote:
Putting perl code in the config file has some drawbacks. First, it won't
work with the new config editor (but that's not released yet). Second,
any code that takes time to execute (eg: contacting a client to list its
modules) will make the CGI script run
On 02/21 07:36 , [EMAIL PROTECTED] wrote:
I was thinking of trying Ubuntu. What is everyone out
there using for this?
Debian is the only way to go. :) Fastest install, easiest upgrades, least
hassle. the amd64 version is still pretty new; but the 32-bit version is
fine for now. (that's what
On 02/17 02:58 , Ken Walker wrote:
I'm getting access denied errors on the local machine backup
that's because your backup is running as an ordinary user, and you need to
be root in order to read some files. try redefining your tar command like
this (in the localhost.pl file).
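The example itself was cut off above; a hypothetical version of such a localhost.pl override might look like the following (the sudo path and the required sudoers entry are assumptions, not part of the original message):

```perl
# Hypothetical sketch: run tar through sudo so it can read root-only files.
# Requires a sudoers entry letting the backuppc user run tar without a password.
$Conf{TarClientCmd} = '/usr/bin/sudo $tarPath -c -v -f - -C $shareName+ --totals';
```

The `$tarPath` and `$shareName+` placeholders are expanded by BackupPC itself at run time, so they stay literal in the config file.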
On 03/06 06:12 , Siju George wrote:
Xfer PIDs are now 3365
Rsync command pid is 3365
Got remote protocol 1836213584
Fatal error (bad version): Permission denied (publickey,keyboard-interactive).
are you invoking sudo or any other command on the remote side? (a command
which isn't working?)
I'm experimenting with an external firewire drive enclosure, and I formatted
it with 3 different filesystems, then used bonnie++ to generate 10GB of
sequential data, and 1,024,000 small files between 1000 and 100 bytes in
size.
I tried it with xfs, reiserfs, and ext3; and contrary to a lot of
On 03/07 04:43 , Guus Houtzager wrote:
I think you're right. I have 2 suggestions for additional testing. It's my
experience that backuppc became really really slow after a few weeks when
more data began to accumulate. Could you test ext3 again, but with a few
million more files? I'm also
On 03/07 09:54 , Les Mikesell wrote:
See if you can find a benchmark program called 'postmark'. This
used to be available from NetApp but I haven't been able to find
a copy recently. It specifically tests creation and deletion
of lots of small files. When I used it years ago it showed
the
On 03/07 08:14 , David Brown wrote:
Unfortunately, the resultant filesystem has very little resemblance to the
file tree that backuppc writes. I'm not sure if there is any utility that
creates this kind of tree, and I would argue that backuppc shouldn't be
either, since it is so hard on the
On 03/08 12:56 , Brendan Simon wrote:
I am using $Conf{ClientNameAlias} quite happily with rsync/ssh unix
hosts, but I can't get it to work with a Windows/smb server.
I assume it should work, or is this a bad assumption ???
it certainly works for me.
what happens when you try to su to the
On 03/08 12:52 , Ken Long wrote:
Works fine as long as you're dedicating a partition/disk/volume to
BackupPC.
In fact, I would *recommend* that you do this. There are a number of
reasons.
- if the main disk fails, you can recover information off the backups to
re-set it up.
- if the backup
On 03/08 11:12 , Dan D Niles wrote:
Is there a way to stop the emails for this host?
$Conf{EMailNotifyMinDays} = 365;
I tried running this command:
[EMAIL PROTECTED]:~$ /usr/share/backuppc/bin/BackupPC_serverMesg
srv.example.tld srv.example.tld backuppc 0
and got this following error:
Got reply: error: bad command srv.example.tld srv.example.tld backuppc 0
I did eventually figure out the problem with my
On 03/08 09:09 , mna.news wrote:
the syntax should be:
/usr/share/backuppc/bin/BackupPC_serverMesg backup srv.example.tld
srv.example.tld backuppc 0
yep. I figured it out eventually. thanks for the help tho. :)
On 03/09 01:43 , David Brown wrote:
Does --whole-file help, or is it slow even at that?
dunno, never tried that option. :)
On 03/15 11:28 , Khaled Hussain wrote:
As far as I understand, with raid5, all raid5 partitions have to be the same
size, so why aren't sda3, sdb1, sdc1 the same size?
With linux software RAID, the partitions don't have to be the same size
(last I knew). You only get data protection up to
On 03/15 03:44 , Guus Houtzager wrote:
It will use the largest common size of all 3 partitions. So if you've
got for instance sda3 = 90 GB, sdb1 = 100 GB and sdc1 = 95 GB, your
raid5 will be built from 3 x 90 GB. The leftover space of the partitions
sdb1 and sdc1 will not be used.
thanks for
On 03/15 10:50 , Matt wrote:
Furthermore, I find the setup with raid controllers tedious: The
modules often don't come with the distro and more often than not they do
not load at boot, forcing me to tweak /etc/rc.local.
haven't had that problem with 3ware controllers. linux kernels have
On 03/15 05:05 , Les Mikesell wrote:
That is probably the best approach, but note that if you boot
with grub you need to manually install it on the 2nd drive
of the raid (or both, depending on the OS distribution and version).
Also, many IDE disk failure modes will keep the machine from
On 03/16 03:16 , Guus Houtzager wrote:
Speaking of 3dm: my experiences with that are not so good.
- no debian package I could find
we ended up building our own.
- bit of a weird startup procedure without clear messages if it started OK or
not (and why it wouldn't start)
- it had a memory
On 03/28 01:58 , Khaled Hussain wrote:
Is it possible to configure backupPC to run full backups on Saturdays
throughout the day, and even on Sundays?
set up a cron job like this:
00 18 * * 6 backuppc /usr/share/backuppc/bin/BackupPC_serverMesg backup
server.example.com server.example.com
On 03/30 12:35 , Tomasz Chmielewski wrote:
Unfortunately, when I add it $Conf{RsyncArgs}, rsync transfers don't
work anymore:
BackupPC doesn't use the regular rsync tool on the server side, it uses a
perl module to do the same thing. Unfortunately, this means that some
features of rsync do not
On 03/29 10:48 , Jos van der sanden wrote:
Is there a manual on how I can schedule a backup?
if you need to make a particular backup go off at a particular time, set up
a cron job like this:
00 18 * * 6 backuppc /usr/share/backuppc/bin/BackupPC_serverMesg backup
server.example.com
On 04/03 10:20 , Tomasz Chmielewski wrote:
compression is one of them.
Umm, that's bad, it was a killer feature for doing remote backups over
internet.
I do the compression at the SSH layer. it might not be quite as good; but it
works, and saves a lot on bandwidth.
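In config terms, that amounts to adding -C to the ssh invocation. Roughly (based on the stock RsyncClientCmd in config.pl; check your own install for the exact default):

```perl
# Sketch: compress at the SSH layer with -C instead of relying on rsync.
$Conf{RsyncClientCmd} = '$sshPath -C -q -x -l root $host $rsyncPath $argList+';
```

SSH's zlib compression is applied to the whole stream, so it recovers much of what rsync's own -z would have saved.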
On 04/03 04:46 , Dan D Niles wrote:
I am seeing hundreds of thousands of these in my log:
create d 755 0/0 512 usr
create d 755 0/0 512 var
Can't open /backup/BackupPC/pc/host/new//f%2f/ for empty output
create 0 / 0
Can't open
On 04/06 10:23 , Craig Barratt wrote:
Carl Wilhelm Soderstrom writes:
Rather than have backuppc schedule the number of simultaneous backups based
on a number of jobs; how about having it schedule based on the current
system load? So if the load is 2 for instance, don't start any new
On 05/17 08:48 , Duane Rinehart wrote:
tarCreate is:
#!/bin/sh -f
exec /bin/tar -c $*
what's this script for?
I just tried Elias's patched rsyncd-for-windows with VSS support
(http://users.tkk.fi/~epenttil/rsync-vss/); but I
can't get it to work on either of the machines I installed it on. (One
W2K3EE, and a WXP Home).
Even if I just tried to run rsyncd.exe from the command line (with the
'--daemon
On 05/24 05:52 , Travis Fraser wrote:
I got it to work. There is a typo in the rsync.conf file relating to the
name of the folder it is in (Program Files/rsync). I tried changing the
folder name to match the file contents, but no luck. Just change in the
rsync.conf file the locations of the
On 05/25 10:33 , Vincent Fleuranceau wrote:
I don't know if there is an RPM version for SuSE, but I think you'd
better do a manual install. I've tried to build my own RPM package for
Mandriva but I simply gave up...
I built an RPM for RH7.3 a while ago, but it didn't work perfectly. If
someone
On 07/07 04:52 , Andy B wrote:
Could anyone suggest the best way to go about moving the pool while
preserving hard-links etc?
if at all possible, dd is the way to go. otherwise the hard links will kill
performance.
On 07/11 08:17 , ken wrote:
I use qmail on fedora. I would like to be able to use a username/password
to send mail and I cannot find where you would enter the password in
backuppc, or if it is possible. Any ideas?
AFAICT, you're misunderstanding how mail works on Unix.
Backuppc calls some
On 07/12 09:04 , Joachim Sturm wrote:
Hi all,
I am just doing my first steps with backuppc and have a Problem.
$Conf{BackupFilesOnly} = ['/\Documents and settings\/achim'];
It is not working. It converts to: /\\Documents\ and\ Settings\\/achim
How to mask the whitespaces correctly?
try
On 07/13 08:38 , Harry Mangalam wrote:
However, doesn't this also mean that as well as restoring their own
files, they could also restore other hosts' files? Is there a
protection against users restoring files from hosts that they don't
own?
yes. If the user isn't listed on the
On 07/14 10:07 , Harry Mangalam wrote:
The downside of this for larger installations tho is that it requires
manual intervention to establish the shared ssh keys (for root!) to
allow remote tar'ing of files, no?
you should be able to set up a special account on each client machine, which
uses
On 07/14 10:33 , Harry Mangalam wrote:
But then doesn't this require even more manual intervention? Set up
the extra user, mod the sudoers file? Or is there a way to automate
this?
For Debian Linux, our company built a package to automate this process. :)
I don't know what kind of
On 07/19 03:55 , JP Vossen wrote:
Suggestions on how to recover?
I'm not a filesystem expert, but in a decent many years of fsck'ing
filesystems, I haven't seen a lot of alternatives to just letting it go
ahead and try to 'fix' everything.
I use Reiserfs myself, on the backuppc data pool
On 07/25 12:12 , Mark Miksis wrote:
This means that I will want to have certain critical files
available (and backed up) somewhere outside of the BackupPC infrastructure:
- BackupPC source or RPM (can be downloaded)
- File::RsyncP source or RPM (can be downloaded)
- any changes to default
On 08/08 02:02 , Paul Estes wrote:
I accidentally dumped a 4GB file into a partition that gets backed up by
BackupPC. I don't need it backed up, and I keep a fairly long history of
backups in the pool, so I don't want to wait for it to age out of the
pool. Is there any way to remove this one
On 08/13 09:50 , David Koski wrote:
they are definitely there in the filesystem in mangled format.
you can use BackupPC_zcat to get the original file out of the storage
format. You'll have to do this on a per-file basis, but at least it's
possible. :)
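Usage is just tool-in, file-out. The paths below are illustrative only: the pool mangles names (attrib-style f-prefixes, %2f for the share), so the exact path will differ on your system:

```shell
# Illustrative only: decompress one stored file back to its original form.
# The pc-tree path shown is an assumed example, not a real layout guarantee.
/usr/share/backuppc/bin/BackupPC_zcat \
    /var/lib/backuppc/pc/somehost/0/f%2f/fetc/fhosts > hosts.restored
```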
On 08/15 02:33 , Nathan Barham wrote:
I'm trying to set up BackupPC on Debian Sarge, using rsync over ssh to
backup another Linux box. I don't want an empty passphrase on
backuppc's ssh key, so I have done these things ...
looks like you're doing ssh as root.
what I did instead, was set up an
here's my doc on how to set up passwordless logins using sudo for additional
security. I probably ought to post this to my website, but I'm entirely too
busy and lazy to get around to that. ;)
-=Introduction=-
Here's a quick and dirty explanation of how to set up a client machine to be
backed up
On 08/16 12:23 , Cristian Tibirna wrote:
On Wednesday, 16 August 2006 11:48, Carl Wilhelm Soderstrom wrote:
-=On the Server=-
Become root on the server again; then go to /etc/backuppc.
Copy one of the other per-host configuration files and edit the options
therein, if necessary.
Most
On 08/17 09:16 , David Simpson wrote:
How do I prevent BackupPC from deleting the partially transfered file?
here's what I've done:
- turn up your timeout to some ridiculously large figure. just put an extra
zero or two at the end of the default value. Yeah, there's a possibility
that this will
On 08/17 04:42 , Nathan Barham wrote:
Thanks for posting this (and for writing it in the first place). It's a
good idea, and the only thing that prevents me from doing it is that
I'll have to do it too many times, and I'll never remember what I did 6
months from now. Perhaps a script is in
On 08/18 03:47 , David Simpson wrote:
Carl Wilhelm Soderstrom wrote:
- turn up your timeout to some ridiculously large figure. just put an extra
zero or two at the end of the default value. Yeah, there's a possibility
that this will cause a hung backup to tie things up for longer than
on a backuppc 2.1.1 installation which I have, e-mail alerts about hosts not
being backed up are sent to the user who is listed as owning that host in
the 'hosts' file. so for instance:
# grep pc32 hosts
pc320 backuppc
the user '[EMAIL PROTECTED]' will get alerts about pc32 not
On 09/01 12:30 , Craig Barratt wrote:
Carl Wilhelm Soderstrom writes:
on a backuppc 2.1.1 installation which I have, e-mail alerts about hosts not
being backed up, are sent to the user which is listed as owning that host in
the 'hosts' file. so for instance:
# grep pc32 hosts
pc32
This is with backuppc 2.1.1-2sarge1 (as is obvious, a debian package) and
rsyncp-0.52 on the server.
I just restored a whole machine from backup, using Knoppix 5.0.1*; and
discovered that some symlinks were not restored correctly.
it seems to be localized to symlinks that point outside the same
Hmm, I just tried downloading the setpci symlink file via the web interface;
and all I got was a text file containing some text which described where the
link pointed to. OK, fair enough, it's not a good test since symlinks aren't
designed to be copied over HTTP.
Tried restoring the file to
Oddly, the problem seems to be related to netcat. If I capture the output of
BackupPC_tarCreate in a tarball, then scp that to the target machine, I can
unpack the files and have the symlinks re-created properly.
it seems to be only when I pipe the files through netcat that the problem
emerges.
On 10/04 09:42 , Eric Stockbridge wrote:
can backuppc be used with rsync to backup from an offsite location?
are you talking about replicating a backuppc server to an offsite location?
if so, the answer is "not well" -- the number of files tends to run rsync out
of memory. smaller installations of
On 10/10 07:21 , Ashley Shaw wrote:
I had a strange error a couple of weeks ago and it is still occurring. It
happens when using the direct restore method, see this error message:
snip
Host key verification failed.
you need to verify that the ssh host keys haven't changed, if they ever
worked
On 10/25 07:07 , Philip Mötteli wrote:
Somehow, I can't restore from my backup. I always have the following
errors:
# BackupPC_restore 127.0.0.1 localhost Users
restore failed: couldn't do /Backup/pc/localhost/Users: No such file
or directory
Is there a way to restore
On 10/26 02:50 , Philip Mötteli wrote:
I tried, but it doesn't work and I have no hint, where the problem is:
# BackupPC_tarCreate -h localhost -n 1 -s /etc/apache2 -t
usage: /usr/local/bin/BackupPC_tarCreate [options] files/directories...
here's a couple of examples out of my
On 10/28 10:23 , Rodrigo Real wrote:
Can't you install rsync on 10.0.0.254? That should be the easiest way,
rsync can preserve hard links with the -H option, additionally it
would transfer only the differences between the two hosts.
on the topic of server replication; the problem with using
On 11/10 04:49 , GATOUILLAT Pierre-Damien wrote:
Perhaps with dd ? Something like :
(on the old server)#dd if=/dev/old_partition | ssh new_server dd
of=/dev/new_partition (perhaps indicate the bs= ? But with which ?)
Or with netcat and dd like that :
(new_server)#nc -l -p 1
On 11/13 10:40 , Jacob S wrote:
If I understand backuppc correctly, a backup can not take longer than
24 hours or it will end the job. (At least, that's the way it seems to
work on my backup servers.)
no, there's a value set in config.pl:
$Conf{ClientTimeout} = 7200;
(at least this is the
On 11/13 01:41 , Jacob S wrote:
Reading about ClientTimeout in config.pl, it seems it only affects how
long backuppc waits to get data, not the overall time the job takes.
that's the theory. in practice it doesn't always work that way, tho Craig's
been trying to fix it. :)
what version are
On 11/14 11:18 , GIAN EBOLI wrote:
now I want to change the folder where the backup sets are being created.
there's really no way to change where the backups are stored via the
configuration files.
Best thing to do would be either to use a bind mount or a symlink to link
the new location to the
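The symlink variant can be sketched like this (paths are examples only; stop the BackupPC daemon before moving anything real):

```shell
# Sketch: point the old pool location at a new directory via a symlink.
mkdir -p /tmp/new_backup_location
# (real migration: stop backuppc, then move the old pool contents over first)
ln -sfn /tmp/new_backup_location /tmp/old_backup_location
readlink /tmp/old_backup_location    # shows where the link now points
```

A bind mount (`mount --bind /new/location /var/lib/backuppc`) does the same job without a symlink in the path, which some tools handle more gracefully.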