N.J. van der Horn (Nico) wrote:
Right, but it has to be done in a separate pass if you're to compare
all files with each other, not just one destination file. And you
need all the RAM, too. It's like the worst case of rsync -H.
What I tried to point out is that when the DB is updated …
Jamie Lokier wrote:
N.J. van der Horn (Nico) wrote:
But you need to verify and update the DB contents - which requires
stat on all the files mentioned in the DB. In other words you might
have to scan everything :-)
This already takes place while Rsync does its job, so it does not have to be done separately.
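Nico's point above, that the DB can be refreshed while rsync does its own scan, still costs one stat() per file. A minimal sketch of such a refresh step (the `refresh_db` helper and its DB layout are hypothetical, not taken from any patch): files are re-read and re-hashed only when their stat signature has changed.

```python
import hashlib
import os

def file_sig(path):
    """Cheap signature from stat: (size, mtime). No file reads needed."""
    st = os.stat(path)
    return (st.st_size, int(st.st_mtime))

def refresh_db(db, paths):
    """Re-hash only files whose stat signature changed since the last run.

    db maps path -> (sig, sha1 hex digest). Returns the number of files
    re-hashed, so an unchanged tree costs one stat() per file, not one read.
    """
    rehashed = 0
    for path in paths:
        sig = file_sig(path)
        entry = db.get(path)
        if entry is not None and entry[0] == sig:
            continue  # stat signature unchanged: trust the stored checksum
        with open(path, "rb") as f:
            digest = hashlib.sha1(f.read()).hexdigest()
        db[path] = (sig, digest)
        rehashed += 1
    return rehashed
```

The same size-plus-mtime shortcut is what rsync itself uses as its "quick check" before deciding to transfer a file.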
David Howe wrote:
Jamie Lokier wrote:
There are methods to perform efficient updates of large numbers of
files and a large amount of data, across simultaneous renames, copies
and edits. But that is the realm of similarity detection indexing,
which is beyond the scope of rsync. At least with the present …
Jamie Lokier wrote:
David Howe wrote:
Jamie Lokier wrote:
I am less worried about individual file renames and/or missing the
opportunity to diff a large file that has been both moved and updated,
than having to resync multiple gigs of stuff over a slow link, because
some user renamed a directory.
The worst-case problem when tackling renamed files and directories arises when they are not only moved or renamed, but also changed in content.
You probably noticed Henri's reply; he pointed to link-backup.
This URL has links to Link-Backup and LSync:
http://connect.homeunix.com
… without doing a full scan first.
The worst-case problem when tackling renamed files and directories arises when they are not only moved or renamed, but also changed in content.
In some ways that's equivalent to transferring one *very large* file
with small edits, efficiently. Renames of small files map to
rearranging data in the large file. Just …
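Jamie's analogy can be made concrete. rsync's delta algorithm matches blocks of the receiver's old data wherever they occur in the sender's new data, so rearranged content is referenced instead of resent. The sketch below is a deliberately naive stand-in: real rsync slides a cheap rolling weak checksum one byte at a time and confirms matches with a strong checksum, while this toy simply re-hashes at every offset.

```python
import hashlib

BLOCK = 4  # toy block size; rsync picks block sizes in the hundreds of bytes

def block_table(old):
    """Hash every BLOCK-sized, block-aligned chunk of the old data."""
    table = {}
    for i in range(0, len(old) - BLOCK + 1, BLOCK):
        table[hashlib.sha1(old[i:i + BLOCK]).digest()] = i
    return table

def delta(old, new):
    """Describe new as ('copy', offset) refs into old plus literal bytes."""
    table = block_table(old)
    out, i, lit = [], 0, b""
    while i <= len(new) - BLOCK:
        h = hashlib.sha1(new[i:i + BLOCK]).digest()
        if h in table:
            if lit:                      # flush pending unmatched bytes
                out.append(("literal", lit))
                lit = b""
            out.append(("copy", table[h]))
            i += BLOCK
        else:
            lit += new[i:i + 1]          # no match here; advance one byte
            i += 1
    lit += new[i:]                       # trailing bytes shorter than a block
    if lit:
        out.append(("literal", lit))
    return out
```

For `old = b"AAAABBBBCCCC"` and `new = b"CCCCAAAAXXBBBB"`, the delta comes out as two copies, a two-byte literal, and another copy; replaying those instructions against the old data reproduces the new data, which is exactly how moved content avoids retransmission.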
N.J. van der Horn (Nico) wrote:
But you need to verify and update the DB contents - which requires
stat on all the files mentioned in the DB. In other words you might
have to scan everything :-)
This already takes place while Rsync does its job, so it does not have to be done separately.
Jamie Lokier wrote:
There are methods to perform efficient updates of large numbers of
files and a large amount of data, across simultaneous renames, copies
and edits. But that is the realm of similarity detection indexing,
which is beyond the scope of rsync. At least with the present
David Howe wrote:
Jamie Lokier wrote:
I am less worried about individual file renames and/or missing the
opportunity to diff a large file that has been both moved and updated,
than having to resync multiple gigs of stuff over a slow link, because
some user renamed a directory.
An approximate …
David Howe wrote:
N.J. van der Horn (Nico) wrote:
What is the current status of both rename-patches?
Are there alternative measures?
Frequently users reorganise directories and files.
Recently a directory of 40GB was renamed...
It took 3 weeks to re-copy all over an ADSL-link.
Hmmm, right: IF and only IF you notice the rename at the source in time,
you can do so at the destination.
But in practice, I see it's getting more and more impossible to keep up
with the growing number of hosts.
Just keeping a DB with characteristics like checksums seems to be not the
ultimate …
N.J. van der Horn (Nico) wrote:
Hmmm, right: IF and only IF you notice the rename at the source in
time, you can do so at the destination. But in practice, I see it's
getting more and more impossible to keep up with the growing number
of hosts. Just keeping a DB with characteristics like checksums …
N.J. van der Horn (Nico) wrote:
What is the current status of both rename-patches?
Are there alternative measures?
Frequently users reorganise directories and files.
Recently a directory of 40GB was renamed...
It took 3 weeks to re-copy all over an ADSL-link.
I have followed the postings for the last couple of years, and …
a full sync is done to ensure nothing is missed.
But my biggest concern is the renamed files and directories.
Right now a renamed directory appears to rsync to have been removed.
Then the new name is recognised as new and a full transfer is carried out.
I am using the backup-dir option to create daily …
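The behaviour described here, a rename showing up as a removal plus a brand-new file, falls out of comparing trees by path alone. A checksum-keyed pass, the idea behind both the rename patches and the DB suggested elsewhere in this thread, can pair the two sides up. A toy illustration with whole trees as in-memory dicts (all helper names hypothetical):

```python
import hashlib

def sha1(data):
    return hashlib.sha1(data).hexdigest()

def diff_by_path(old, new):
    """rsync's view: dicts of path -> content, compared by path only."""
    removed = [p for p in old if p not in new]
    added = [p for p in new if p not in old]
    return removed, added

def find_renames(old, new):
    """Pair 'removed' and 'added' entries whose content hashes match."""
    removed, added = diff_by_path(old, new)
    by_hash = {sha1(old[p]): p for p in removed}
    renames = {}
    for p in added:
        h = sha1(new[p])
        if h in by_hash:
            renames[by_hash.pop(h)] = p  # same bytes, new name: a rename
    return renames
```

A path-only diff reports `docs/a.txt` as removed and `archive/a.txt` as added even when the bytes are identical; the hash-keyed pass matches them up, which is what saves the re-transfer of a renamed 40 GB tree.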
On Mon, Feb 16, 2009 at 10:14:25AM +0100, N.J. van der Horn (Nico) wrote:
What is the current status of both rename-patches?
Are there alternative measures?
I'm not thrilled with how the rename patches work, especially since they
disable incremental recursion. As such, I'm hoping to change …
Thanks, can you give me some pointers?
Nico
henri wrote:
There is a project called link-backup which may be worth a look.
I am not sure if this will help you at all, but check it out.
All the best. Hope this helps.
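For context on Henri's pointer: link-backup's central trick is that each snapshot hard-links any file whose content already exists in the previous snapshot, so a renamed file costs one new directory entry instead of a re-transfer over the link. The sketch below is only a toy version of that idea, not link-backup's actual code; the `snapshot` helper and its index layout are assumptions.

```python
import hashlib
import os
import shutil

def snapshot(src, dest, prev_index=None):
    """Copy src into dest, hard-linking any file whose content already
    exists in the previous snapshot (prev_index: sha1 -> stored path).

    Renamed files hash the same, so they become links, not copies.
    Returns (index for the next run, number of files actually copied).
    """
    prev_index = prev_index or {}
    index, copied = {}, 0
    for root, _dirs, files in os.walk(src):
        for name in files:
            path = os.path.join(root, name)
            rel = os.path.relpath(path, src)
            with open(path, "rb") as f:
                digest = hashlib.sha1(f.read()).hexdigest()
            target = os.path.join(dest, rel)
            os.makedirs(os.path.dirname(target), exist_ok=True)
            if digest in prev_index:
                os.link(prev_index[digest], target)  # no data copied
            else:
                shutil.copy2(path, target)
                copied += 1
            index[digest] = target
    return index, copied
```

The hashing still has to read every source file, which is exactly the full-scan cost debated above; the saving is in bytes sent, not in local I/O.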
What is the current status of both rename-patches?
Are there alternative measures?
Frequently users reorganise directories and files.
Recently a directory of 40GB was renamed...
It took 3 weeks to re-copy all over an ADSL-link.
I have followed the postings for the last couple of years, and …