Toni writes:
> So that is today.
>
> BackupPC full dump, with patch which removed --ignore-times for a full
> backup:
> Done: 507 files, 50731819 bytes
> full backup complete
> real 13m39.796s
> user 0m4.232s
> sys  0m0.556s
> Network IO used: 620MB
>
> 'rsync -auvH --ignore-times' on the same data:
> sent 48
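The plain-rsync numbers are cut off above, but rsync's own accounting is how such a comparison can be reproduced; a minimal sketch, assuming hypothetical source and destination paths:

    # --stats prints 'sent X bytes  received Y bytes' totals at the end
    time rsync -auvH --ignore-times --stats /srv/data/ backuphost:/srv/copy/

Those totals can then be set against whatever the network monitor reports for the BackupPC run over the same data.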
nexenta is alive and well. In fact, check this out:
http://www.nexenta.com/corp/
nexenta is not advancing at the pace of Ubuntu, though. I like the Ubuntu
system, so nexenta is great for me. If I were you, and you were not tied to
Ubuntu, then you might consider OpenSolaris or Solaris 10. Solaris 10
Gene Horodecki wrote:
> Sounds reasonable... What did you do about the attrib file? I noticed
> there is a file called 'attrib' in each of the pool directories with
> some binary data in it.
>
Nothing... it just contains permissions, etc. That's why I did another
full after the move -- then
Gene Horodecki wrote:
> I had that problem as well.. so I uhh.. well, I fiddled with the backup
> directory on the backuppc server and moved them around so that backuppc
> wouldn't see I had moved them remotely.. Not something I would exactly
> recommend doing... although it worked.
Great suggestion
> Perhaps you could fiddle with them to make them exactly the same...
> At least if you have the 3.x version you will be able to stop and
> restart the initial full if you have to while getting the first complete
> copy.
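How to make the moved trees "exactly the same" depends on how they were relocated; a hedged sketch, with hypothetical paths:

    # within one filesystem a plain mv keeps inodes, hardlinks and mtimes
    mv /old/location /new/location
    # across filesystems, copy with attributes and hardlinks preserved
    rsync -aH /old/location/ /new/location/

If times, permissions and ownership survive the move, a later rsync-based full should see the files as unchanged.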
I fiddled with the paths of my biggest backup in order to simplify an
offsite copy, and now, because the files aren't "exactly the same", it seems
it's going to take as long as the very first backup, which was 4x as long as
subsequent fulls. Unfortunate, because all the files are there.. but they
need
Les Mikesell wrote:
Gene Horodecki wrote:
> Is this true? Why not just send the checksum/name/date/permissions of the
> file first and see if it exists already, and link it in if it does? If the
> file does not exist by name but there is a checksum for the file, then just
> use the vital data to link
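What Gene describes is essentially content-addressed pooling with hardlinks. A minimal sketch of that idea in shell, not BackupPC's actual implementation (the flat /var/pool layout is hypothetical; real pools also spread files over hashed subdirectories and handle checksum collisions):

    #!/bin/sh
    # store each unique file once, keyed by checksum; link all other copies
    file=$1; dest=$2
    sum=$(md5sum "$file" | cut -d' ' -f1)
    pooled="/var/pool/$sum"
    if [ ! -e "$pooled" ]; then
        cp -p "$file" "$pooled"    # first occurrence of this content
    fi
    ln "$pooled" "$dest"           # later occurrences cost only a hardlink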
dan wrote:
> the ZFS machine is a nexenta (opensolaris+ubuntu) machine with an
> Athlon64 X2 3800+ and 1GB RAM with two 240GB SATA drives in the array.
> It's a Dell E521.
Is nexenta still an active project? And would you recommend using it?
--
Les Mikesell
[EMAIL PROTECTED]
On Nov 27, 2007 9:33 AM, Les Mikesell <[EMAIL PROTECTED]> wrote:
> Toni Van Remortel wrote:
>
> > But I do know that BackupPC does use more bandwidth.
>
> I'm not sure what you mean by 'pool' here. The only thing relevant to
> what a BackupPC rsync transfer will copy is the previous full of the
> same machine. Files of the same name in the same location will use the
> rsync algorithm to decide how much, if any, data needs to be copied -
> any
Toni Van Remortel wrote:
> But I do know that BackupPC does use more bandwidth.
> Besides: when dumping a full backup, the 'pool' means (I hope): the file
> is already in the pool, so use it. If not, then there is a problem, as
> those files are already in another backup set of the test host. But BackupPC
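Whether files really landed in the pool can be spot-checked through link counts; a hedged sketch, assuming a /var/lib/backuppc top directory (adjust to your $TopDir):

    # files under the pc tree with link count 1 were never pooled
    find /var/lib/backuppc/pc/testhost -type f -links 1 | wc -l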
What kind of specs does your server have (besides running ZFS)? That is,
processor, memory, etc.
I've got a P-III 500MHz with 512MB RAM as my backup server. It is also my
file server (I want to split those into separate machines, but I can't right
now), with about 250GB of data. (Most of that i
With rsync, the time required to do a backup depends as much on the number
of files as on the total size of the data. For example, backing up an email
server with 20GB in 2 million files will take much longer than backing up
ten 2GB ISOs. (*)
So "I backed up X GB in Y minutes" is meaningless without
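Which regime a given tree falls into is easy to check up front; a hedged sketch with a hypothetical path:

    find /srv/mail -type f | wc -l    # file count drives per-file overhead
    du -sh /srv/mail                  # total size drives raw transfer time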
I back up about 6-7GB during a full backup of one of my SCO Unix servers
using rsync over ssh, and it takes under an hour.
4-5GB on a very old Unix machine using rsync on an NFS mount takes just
over an hour.
Full backups of my laptop are about 8GB and take about 15 minutes, though it
is on gigabit
Toni Van Remortel wrote:
And I did set up BackupPC here 'as-is' in the first place, but we saw
that the full backups, which ran every 7 days, took about 3 to 4 days to
complete, while for the same hosts the incrementals finished in 1 hour.
That's why I started digging into the principles of BackupPC
Les Mikesell wrote:
> How are you measuring the traffic?
ntop.
Anyway, I'm preparing a separate test setup now, to be able to do
correct tests (so both BackupPC and an rsync tree are using data from
the same time).
Test results will be here tomorrow.
But I do know that BackupPC does use more bandwidth.
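Where ntop is not available, a low-tech alternative on Linux is to diff the interface byte counters around a run; a hedged sketch (the interface name is an assumption):

    rx0=$(cat /sys/class/net/eth0/statistics/rx_bytes)
    # ... run the backup ...
    rx1=$(cat /sys/class/net/eth0/statistics/rx_bytes)
    echo "$((rx1 - rx0)) bytes received during the backup"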
Toni Van Remortel wrote:
>> Could you give us some numbers? How much traffic are you seeing for
>> a BackupPC backup compared to a 'plain rsync'?
> Full backup, run for the 2nd time today (no changes in files):
> - BackupPC full dump: killed it after 30 mins, as it pulled all data
> again (2.8G
PS: I hacked BackupPC to skip the '--ignore-times' argument addition for
full backups.
--
Toni Van Remortel
Linux System Engineer @ Precision Operations NV
+32 3 451 92 26 - [EMAIL PROTECTED]
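For anyone wanting to repeat the hack mentioned in the PS, a hedged way to find where the argument gets appended (the install path is a guess and varies by distro):

    # locate where BackupPC adds --ignore-times for full rsync backups
    grep -rn 'ignore-times' /usr/share/backuppc/lib/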
Nils Breunese (Lemonbit) wrote:
> It might be because BackupPC doesn't run the equivalent of 'rsync
> -auv'. See $Conf{RsyncArgs} in your config.pl for the options used,
> and remember rsync is talking to BackupPC's rsync interface, not a
> stock rsync.
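A quick hedged check of what your own install passes to the rsync layer (the config path is an assumption; Debian-style installs often use /etc/backuppc/config.pl):

    grep -n -A12 'RsyncArgs' /etc/backuppc/config.pl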
Nils Breunese (Lemonbit) wrote:
> Toni Van Remortel wrote:
>> How can I reduce bandwidth usage for full backups?
>>
>> Even when using rsync, BackupPC does transfer all data on a full backup,
>> and not only the modified files since the last incremental or full.
> That's not true. Only modifications are transferred over the network
> when
How can I reduce bandwidth usage for full backups?
Even when using rsync, BackupPC does transfer all data on a full backup,
and not only the modified files since the last incremental or full.
I would love to see BackupPC performing this simple task:
- cp -al $last new
- rsync -au --delete host:/s
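The second command is cut off, but the two steps are the classic hardlink-snapshot pattern; a hedged sketch of the full idea, with hypothetical paths:

    # 1. hardlink-copy the previous snapshot: costs inodes, not data
    cp -al /backups/host/last /backups/host/new
    # 2. rsync into it: unchanged files stay as links; changed files are
    #    written to a temp copy and renamed in, breaking the old link
    rsync -au --delete host:/srv/data/ /backups/host/new/

This is the rotation scheme the well-known "rsync snapshot" howtos describe: each snapshot directory looks complete, while unchanged files are stored only once.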