We have a CMS that basically stores user data in a fs structure such as
/users/a/b/abraham/. Whenever a user edits one of their own files, the
webapp will touch a file in a specific location such as
/activeUsers/abraham. We use a predump script that quickly generates a
list of recently activ
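A pre-dump script along those lines might look roughly like this (the marker file and output path are invented for the example, not the poster's actual setup):

  #!/bin/sh
  # hypothetical pre-dump: collect users whose touch-file changed since the last run
  MARKER=/var/lib/backuppc/lastdump.marker      # made-up marker file
  [ -f "$MARKER" ] || touch -t 197001010000 "$MARKER"   # bootstrap on first run
  find /activeUsers -type f -newer "$MARKER" -printf '%f\n' > /tmp/activeUsers.list
  touch "$MARKER"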
I'd like to run "full" backups at night (say, 10pm-2am), but run
incrementals every 2 hours from 6am-6pm. There doesn't seem to be any
way to do this. Unless, maybe I can use a predump script to test the
time and $type and abort fulls that try to run during the day? It would
be annoying to s
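That pre-dump idea might look roughly like this, assuming your BackupPC version honors the script's exit status (e.g. $Conf{UserCmdCheckStatus} in 3.x) and that $type is passed in as the first argument; the script path is made up:

  # in config.pl:
  $Conf{DumpPreUserCmd} = '/usr/local/bin/skip-day-fulls.sh $type';

  #!/bin/sh
  # /usr/local/bin/skip-day-fulls.sh: refuse fulls between 6am and 6pm
  TYPE="$1"              # "full" or "incr"
  HOUR=$(date +%H)
  if [ "$TYPE" = "full" ] && [ "$HOUR" -ge 6 ] && [ "$HOUR" -lt 18 ]; then
      exit 1             # non-zero exit -> this dump gets skipped
  fi
  exit 0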
It looks like the only thing open when backuppc is running and idle is
the /backuppc/log/LOG file.
So, if you symlink that dir to somewhere on another filesystem, I don't
see why you can't use automount or
maybe pre/post scripts to achieve what you want, for whatever reason :-)
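e.g., roughly, with the target filesystem invented and BackupPC stopped while the move happens:

  /etc/init.d/backuppc stop
  mv /backuppc/log /mnt/otherfs/backuppc-log       # made-up destination
  ln -s /mnt/otherfs/backuppc-log /backuppc/log
  /etc/init.d/backuppc start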
brien
Roger S
Are you using rsync -H, to preserve hard links? You may find it
unbearably memory/time/resource intensive to use rsync for this.
Since you are using lvm (assuming you have some unused space), you could
create a snapshot (lvcreate -s) and then dump the raw block device over
ssh (or nc). (dd
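As a rough sketch (VG/LV names, snapshot size, and destination are made up):

  # take a temporary snapshot of the backuppc LV
  lvcreate -s -L 5G -n pc_snap /dev/vg0/backuppc
  # stream the raw snapshot device to another machine over ssh
  dd if=/dev/vg0/pc_snap bs=1M | ssh otherhost 'dd of=/backups/backuppc.img bs=1M'
  # drop the snapshot when done
  lvremove -f /dev/vg0/pc_snap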
Most NFS servers are pitifully slow compared to a local filesystem,
particularly when dealing with many small files. It pains me to think
about how slow that might get-- is anyone else using a non-local
filesystem?
brien
Simon Köstlin wrote:
I think TCP is a safer connection or plays that
Has anyone tried using mdns/bonjour for clients? Macs have it enabled
by default, most linux distros have it, although not enabled, and you
can download it for Windows... Then you wouldn't have to do anything
special; they'd be normal dns lookups-- fred.local, etc.
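For instance, if the BackupPC server itself can resolve .local names (nss-mdns/avahi on Linux, or Bonjour), the hosts file could just use them as-is; the entries below are made up:

  # BackupPC hosts file:  host   dhcp  user
  fred.local     0    fred
  wilma.local    0    wilma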
brien
James Kyle wrote:
>
The ".filename" stuff is called AppleDouble and I think it preserves
metadata as well in there.
Are you also getting a ton of xfer errors as below? (I did)
/usr/bin/tar: /tmp/tar.md.Fif1QE: Cannot stat: No such file or directory
/usr/bin/tar: /tmp/tar.md.b7XePQ: Cannot stat: No such file or di
You can just make a pre-dump script that does ssh -l root $1
"/etc/init.d/apache stop", and pass in the $hostIP. Then a post-dump
script that starts them up again...
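A minimal sketch of that pair, with the script paths invented and apache as the example service:

  # in the host's config.pl:
  $Conf{DumpPreUserCmd}  = '/usr/local/bin/stop-web.sh $hostIP';
  $Conf{DumpPostUserCmd} = '/usr/local/bin/start-web.sh $hostIP';

  # /usr/local/bin/stop-web.sh (start-web.sh is the same with "start"):
  #!/bin/sh
  ssh -l root "$1" "/etc/init.d/apache stop"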
brien
Dave Fancella wrote:
> All,
>
> So is there a way to have backuppc shut down all the servers on a machine
> before it ru
Only other OSX clients with rsync (not backuppc's perl implementation
of rsync) and an HFS filesystem will be able to handle the -E stuff. I
think the consensus is to use xtar for mac clients, as it creates a
standard tar archive that can be extracted on regular filesystems
(putting meta-stuff
How can I get this to work? I am storing the data inside lasttime.txt
(don't ask why) :-)
$Conf{TarIncrArgs} = '--newer=`cat lasttime.txt` $fileList+';
The shell command within the ` ` does not get executed, so of course this
doesn't work at all. Any ideas?
Thanks!
Brien
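One workaround that might be worth sketching (untested): move the backtick logic onto the client in a tiny wrapper and point $Conf{TarClientPath} at it, so $Conf{TarIncrArgs} can stay as a plain '$fileList+'. Paths below are placeholders:

  #!/bin/sh
  # /usr/local/bin/tar-newer (hypothetical): read the timestamp, then run the real tar
  NEWER=$(cat /path/to/lasttime.txt)
  exec /bin/tar --newer="$NEWER" "$@"

A full dump would need to bypass the --newer part (or the wrapper could skip it when the file is missing), so treat this as half an answer at best.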
---
This is going to be unsupported, I know, but for my own amusement (and
possibly yours!) can someone help me understand the ramifications of
subverting some of the tar options during the backups?
Specifically, take this scenario:
#1 full backup of / (tar -cvf - --totals -C / ./)
so tar backs
Have you tried escaping the spaces with a \ ? Like:
'/Application\ Data/'
Not sure if that will work, but it sounds like it's worth a shot.
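Another angle that might be worth a try: rsyncd.conf also supports an "exclude from" file, where each pattern sits on its own line and spaces need no escaping at all. Roughly (module name and paths adapted/invented for the example):

  [docs]
      path = /cygdrive/c/Documents and Settings
      exclude from = /etc/rsyncd.exclude

  # /etc/rsyncd.exclude -- one pattern per line
  /Jennie and Andy/Temp/
  /Application Data/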
brien
Jim McNamara wrote:
Hello again list!
I'm running into some trouble with excluding directories in rsyncd.conf
on a windows machine. The mach
/" "/Jennie\ and\ Andy/Temp/"
I didn't think it would be necessary on the windows machine as it
handled c:\Documents and Settings without special regard to the
whitespace in the path, but figured it was better safe than sorry.
Unfortunately, it still grabs the entire
It sounds a lot like you've hit some cygwin/rsync/smb bugs?
from the faq:
Smbclient is limited to 4GB file sizes. Moreover, a bug in
smbclient
(mixing signed and unsigned 32 bit values) causes it to incorrectly
do the tar octal conversion for file sizes from 2GB-4GB.
BackupPC_tarExt
I don't think another instance of backuppc would work "very well" for a
number of reasons. However, I think you could do well with copying the
raw block device over netcat or ssh. If you are using LVM for the
backuppc data you could take a snapshot and not affect regular backups,
otherwise yo
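The netcat flavor, very roughly (device, host, and port are made up; nc flag syntax varies between netcat flavors):

  # on the receiving machine: listen and write the image out
  nc -l -p 9000 > /backups/backuppc-pool.img
  # on the backuppc server: push the (ideally snapshotted) block device across
  dd if=/dev/vg0/pc_snap bs=1M | nc otherhost 9000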
Mario Giammarco wrote:
> Hello,
> I would like backuppc in my complex situation:
>
> - I have some real clients and some virtual clients (openvz based) to backup
>
You can treat the VMs just like you would regular physical hosts, if
you'd like to keep things simple.
> - The virtual linux are on
Ludovic Gele wrote:
According to Brien Dieterle <[EMAIL PROTECTED]>:
I don't think another instance of backuppc would work "very well" for a
number of reasons. However, I think you could do well with copying the
raw block device over netcat or ssh. If you
It sounds like you are describing rsync building the file list before
transferring starts, which takes a long time, and there really isn't
much you can do about that. One thing you might try is splitting your
data up with multiple RsyncShareNames. I'm thinking that might help
avoid ALRM, but
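For example, something along these lines (paths made up):

  $Conf{RsyncShareName} = ['/home/a-l', '/home/m-z', '/var/www'];

Each share then gets its own, smaller file list built and transferred in turn.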
Jason Hughes wrote:
> Evren Yurtesen wrote:
>
>> Jason Hughes wrote:
>>
>>> That drive should be more than adequate. Mine is a 5400rpm 2mb
>>> buffer clunker. Works fine.
>>> Are you running anything else on the backup server, besides
>>> BackupPC? What OS? What filesystem? How many
Evren Yurtesen wrote:
David Rees wrote:
On 3/27/07, Les Mikesell <[EMAIL PROTECTED]> wrote:
Evren Yurtesen wrote:
What is wall clock time for a run and is it
reasonable for having to read through both the client and server copies?
Here are some benchmarks I ran last week: I think it's important
to balance the -s with the -n numbers so that you are
dealing with the same amount of data, otherwise caching can bite you
and you can have misleading results. Therefore, I used 10k file-size,
and adjusted the number of files to
I don't see mine either; I think it's normal. It wouldn't make sense to
show that type of information (# of incrementals, etc.) for an archive
host, I don't think...
brien
benjamin thielsen wrote:
> hi-
>
> i'm having what is probably a basic problem, but i'm not sure where
> to look next in
It sounds very much like a hardware problem, perhaps slightly toasted
ide controllers? It sounds like a commodity box; can you move all the
disks to another machine and fire it up? Oh, and go get a decent UPS! :-)
brien
Klaas Vantournhout wrote:
> Dear all,
>
> The real questions are at the bo
To not answer your question: why don't you just let the new machine use
the existing configs? Assuming you keep a few fulls, you'll still have
access to the old files just the same. You could also archive it if you
really wanted to preserve it as-is. Basically, what I'm saying is OS
changes sh
I've already been down this road, unfortunately. It's not scenic.
You can do something with a pre-dump script that runs
"find . -iname '*.doc' > /tmp/filelist"
and then modify your tar command to use tar -T /tmp/filelist.
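The moving parts look roughly like this (paths and the pattern are placeholders):

  # pre-dump, run on the client: build the list of files you actually want
  find /home -iname '*.doc' > /tmp/filelist
  # then have the client-side tar read that list instead of BackupPC's own $fileList:
  $Conf{TarFullArgs} = '--totals -T /tmp/filelist';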
Be warned, this totally messes up backuppc's notions of how
incrementals work, and how
I think you want to add -H to the rsync command. Or just dd if=/dev/sda
of=/dev/sdb. Or just use an archive host w/ parity. I'd recommend making
/dev/sdb an external drive and only connecting it when doing backups.
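e.g. something like (pool path and destination mount point assumed):

  rsync -aH --delete /var/lib/backuppc/ /mnt/external/backuppc/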
brien
YOUK Sokvantha wrote:
> Dear All,
>
> I installed Backuppc 2.1.2-2ubuntu5 on Ubuntu 5
You may be a bit off track with the levels... I'd leave that alone (0?)
until you've mastered the basics.
Take a look at $Conf{FullKeepCnt}
with a $Conf{FullPeriod} of 6.97...
$Conf{FullKeepCnt} = [ 1, 0, 0, 1 ];
would give you a weekly full and a two-month full. A three-month interval
would be tricky...
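For reference, each slot of $Conf{FullKeepCnt} keeps that many fulls at 1x, 2x, 4x, 8x... times $Conf{FullPeriod}, so with a period of about a week:

  $Conf{FullPeriod}  = 6.97;
  #                     1wk 2wk 4wk 8wk
  $Conf{FullKeepCnt} = [ 1,  0,  0,  1 ];
  # -> one full kept at roughly a week old, plus one at roughly eight weeks (~2 months)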
I have been struggling with this for a few weeks now...
Server: debian sarge: backuppc 2.1.1-2
Client: OSX Tiger 10.4.2 "Server"
using the new tiger tar, or xtar, I get the same results:
Everything transfers along just fine until it hits my netboot images.
It transfers about 6.5 gigs of a 12 gi
You might want to just use good ole' tar in 10.4. It will preserve
resource forks (unlike rsync) by creating AppleDouble files inside the
tar archive.
You might also want to disable ACLs if by some odd chance you have them
enabled. Here is what I use with modest success:
$Conf{TarClientCmd}
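For reference only (not the poster's actual setting), an ssh-based tar setup usually looks something like the stock command, with the client tar pointed at Tiger's /usr/bin/tar:

  $Conf{TarClientCmd} = '$sshPath -q -x -n -l root $host'
                      . ' $tarPath -c -v -f - -C $shareName+ --totals';
  $Conf{TarClientPath} = '/usr/bin/tar';   # the 10.4 tar, which writes the AppleDouble entries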