Cutting the backup into 4 parts did the trick; I also moved the part that took
the longest over to tar. Curiously, it is not the part with the most files
but the part with the most directories that takes so long to back up :)
Anyway, the 8 million files are backed up now.
Thanks for your help.
Regards,
You could transfer the tars to the BackupPC host, not into the pool but to a temp
directory, and unpack them there, all via a pre-backup script.
Then BackupPC steps in and creates a local backup of these temporary
files, so you get the pooling. In the post-backup script you flush the
temp files.
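Something like the following (untested sketch; the paths and script names are made up, and the hooks would be wired in through the per-host $Conf{DumpPreUserCmd} and $Conf{DumpPostUserCmd} settings):

  #!/bin/sh
  # pre-backup hook: unpack the uploaded tars into a staging tree that
  # BackupPC then backs up as local data, so pooling still works
  set -e
  UPLOADS=/var/tmp/game-uploads    # where the webserver pushes its tars (assumed)
  STAGING=/var/tmp/game-staging    # the share BackupPC is pointed at (assumed)
  mkdir -p "$STAGING"
  for t in "$UPLOADS"/*.tar; do
      [ -e "$t" ] || break         # nothing uploaded, nothing to unpack
      tar -xf "$t" -C "$STAGING"
  done

  #!/bin/sh
  # post-backup hook: flush the temporary staging tree again
  rm -rf /var/tmp/game-staging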
On Mon, Dec 19, 2011 at 2:27 AM, gagablub...@vollbio.de wrote:
You could transfer the tars to the BackupPC host, not into the pool but to a temp
directory, and unpack them there, all via a pre-backup script.
Then BackupPC steps in and creates a local backup of these temporary
files, so you get
On Mon, 2011-12-19 at 12:32 -0600, Les Mikesell wrote:
On Mon, Dec 19, 2011 at 12:04 PM, Jean Spirat jean.spi...@squirk.org wrote:
I directly mount the NFS share on the BackupPC server, so there is no need for
rsyncd here; this is like a local backup, with the NFS overhead of course.
The whole point
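For reference, that setup is just an ordinary read-only NFS mount on the BackupPC server (export name and mount point invented here), which BackupPC then backs up as a plain local path:

  # on the BackupPC server
  mount -t nfs -o ro hosting-nas:/export/game /mnt/game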
Sorry to take so long to reply.
Yes, it saves me a lot of time; let me explain.
Although I have a fast SAN and servers, the time for fetching lots of small
files is high: the maximum bandwidth I could get was about 5 MB/s; increasing
concurrency I can get about 20-40 MB/s depending on what I'm backing up at
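To illustrate where the difference comes from: one serial walk over the whole tree gives a single slow stream, while several per-directory streams running at once add up. In BackupPC terms that is what splitting the host into smaller backups and letting them run in parallel achieves; the rough shell equivalent would be something like this (host and paths invented, untested):

  #!/bin/sh
  # one tar-over-ssh stream per top-level directory, all running at once,
  # instead of a single serial pass over the whole tree
  for d in $(ssh mailstore 'ls -d /srv/mail/vol*'); do
      ssh mailstore tar -cf - "$d" | tar -xf - -C /backup/staging &
  done
  wait   # let all parallel streams finish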
Why don't you ask the developers to write a script that creates one or a
few tar files out of this massive number of files?
The execution of that script could be triggered via an HTTP request (with
authentication). On the BackupPC side you could call this script via the
pre-backup command before
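Such a wrapper, used as the pre-backup command, could be as small as this; the URL, credentials and the "ready" check are pure assumptions here:

  #!/bin/sh
  # pre-backup hook: ask the webapp to build its tar archives, then wait
  # until they are reported ready before the dump is allowed to start
  set -e
  curl -fsS -u backup:secret https://game.example.com/admin/make-tars
  until curl -fsS -u backup:secret https://game.example.com/admin/tars-ready; do
      sleep 60    # poll the (hypothetical) ready flag once a minute
  done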
You may try to use rsyncd directly on the server. This may speed
things up.
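A minimal rsyncd setup on the webserver could look like this (module name and path are made up; BackupPC would then use the rsyncd transfer method against the [game] module):

  # quick-and-dirty rsync daemon on the webserver
  cat > /etc/rsyncd.conf <<'EOF'
  [game]
      path = /var/www/game
      read only = yes
  EOF
  rsync --daemon    # reads /etc/rsyncd.conf by default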
Another thing is to split the large backup into several smaller ones. I've got
an email cluster with 8 TB and millions of small files (I'm using dovecot);
there's also a SAN involved. In order to use all the bandwidth
I'd rather deal with a few tarfiles, too, but you'll lose pooling...
Unless the script that makes the tarfiles is intelligent. In which case
BackupPC is somewhat overkill.
Basically, your choices are poor no matter what. Garbage in, garbage out,
and all that...
Timothy J. Massey
Out of the Box
Hi,
I use BackupPC to back up a webserver. The issue is that the application
used on it creates thousands of little files that a game uses to create
maps and various things. The issue is that we are now at 100 GB of data
and 8,030,000 files, so the backups take 48 hours and more (to help, the files
On Fri, 2011-12-16 at 10:42 +0100, Jean Spirat wrote:
Hi,
I use BackupPC to back up a webserver. The issue is that the application
used on it creates thousands of little files that a game uses to create
maps and various things. The issue is that we are now at 100 GB of data
and 8,030,000
I would suggest you try the following:
move to tar over ssh on the remote webserver; the first full backup
might well take a long time, but the following ones should be faster.
tar+ssh backups do use more bandwidth, but as you are already using
NFS I am assuming you are on a local
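The manual equivalent of that is a single streamed pass, with no intermediate archive on either side (host and paths invented); BackupPC's tar transfer method builds and consumes the same kind of stream itself:

  # stream the share from the webserver and unpack it on the backup host
  ssh -q root@webserver 'tar -cf - -C /var/www game' | tar -xf - -C /backup/staging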
On Fri, 2011-12-16 at 11:49 +0100, Jean Spirat wrote:
I would suggest you try the following:
tar+ssh backups do use more bandwidth, but as you are already using
NFS I am assuming you are on a local network of some sort.
To my understanding, rsync has always seemed to be the most
On Fri, Dec 16, 2011 at 4:49 AM, Jean Spirat jean.spi...@squirk.org wrote:
Hum, I cannot directly use the FS: I have no access to the NFS server, which
is on the hosting company's side; I just have access to the webserver that
uses the NFS partition to store its content. Right now I also mount the
On Fri, Dec 16, 2011 at 4:42 AM, Jean Spirat jean.spi...@squirk.org wrote:
The issue is that we are now at 100 GB of data
and 8,030,000 files, so the backups take 48 hours and more (to help, the files
are on an NFS share). I think I have come to the point where file backup is at
its limit.
What about a
On Fri, 2011-12-16 at 07:33 -0600, Les Mikesell wrote:
On Fri, Dec 16, 2011 at 4:49 AM, Jean Spirat jean.spi...@squirk.org wrote:
To my understanding, rsync has always seemed to be the most efficient
of the two, but I never challenged this fact ;p
Rsync working natively is very efficient,
Hi,
On Friday 16 December 2011 10:42:00 Jean Spirat wrote:
I use BackupPC to back up a webserver. The issue is that the application
used on it creates thousands of little files that a game uses to create
maps and various things. The issue is that we are now at 100 GB of data
and 8,030,000 files
Excuse my off-topicness, but with that many small files I kind of expect a
filesystem to reach certain limits. Why is that webapp written to use many
little files? Why not use a database where all that stuff is in blobs?
That would be easier to maintain and easier to back up.
Have fun,
On Fri, Dec 16, 2011 at 9:00 AM, Jean Spirat jean.spi...@squirk.org wrote:
Excuse my off-topicness, but with that many small files I kind of expect a
filesystem to reach certain limits. Why is that webapp written to use many
little files? Why not use a database where all that stuff is in