We are rsync'ing large (hundreds of GB) and constantly changing Berkeley DB
(aka Sleepycat) datasets (the RPM database uses the same format, but its
dataset is extremely small). When a change occurs (insert, update, delete,
etc.) in a BDB it has a tendency to propagate through the binary database
Hi,
I am a new user and have tried at least some of the recommended ways to find
an answer to my question (searching the net, asking IT professionals). But
when everything else fails, nothing beats newsgroups...
Here is my problem. I compiled rsync 2.6.2 on a Tru64 v5.1b system. I am
trying to use rsync to bac
Hi,
rsync'ing a gzip file is not such a good idea!
Because of the gzip algorithm, a small change in the original file will
cause a big change in the gzip file.
That means rsync will copy nearly the whole file after each change, not what you
really want!
To overcome this you can use a special version of gzip, it
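The effect described above is easy to demonstrate. Here is a small sketch
(the data, the insertion offset, and the use of zlib in place of the gzip
command are all arbitrary illustration choices): insert a single byte near
the front of some incompressible, deterministically generated data and see
how little of the compressed output survives.

```python
import hashlib
import zlib

# ~128 KB of deterministic pseudo-random (hence incompressible) data
data = b"".join(hashlib.sha256(str(i).encode()).digest() for i in range(4000))

# the "small change": one byte inserted at offset 1000
changed = data[:1000] + b"!" + data[1000:]

ca = zlib.compress(data)
cb = zlib.compress(changed)

def common_prefix(x: bytes, y: bytes) -> int:
    """Count how many leading bytes two byte strings share."""
    n = 0
    for a, b in zip(x, y):
        if a != b:
            break
        n += 1
    return n

# bytes unchanged at the start plus bytes unchanged at the end
shared = common_prefix(ca, cb) + common_prefix(ca[::-1], cb[::-1])
print(f"compressed size: {len(ca)}, unchanged bytes: {shared}")
```

Almost the entire compressed stream differs even though the inputs share
all but one byte, which is why rsync's delta algorithm finds nothing to
reuse.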
On Fri, Jul 16, 2004 at 04:48:55PM -0400, Chris Shoemaker wrote:
> what's this part about standard input?
It reads the batch data from stdin (the current manpage says "list"
where it should say "batch data"). This is quite useful, as seen
in the example in the manpage:
$ rsync --write-batch=
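(For anyone who can't check the manpage right now, the stdin usage might look
roughly like the following; the batch file name, paths, and host here are
made up for illustration, and `--read-batch=-` is what tells rsync to read
the batch data from standard input.)

```shell
# record the differences between two trees into a batch file
rsync --write-batch=changes -a /data/src/ /data/mirror/

# replay the same batch on another machine, piping the batch
# data to rsync's standard input via --read-batch=-
ssh otherhost rsync --read-batch=- -a /data/mirror/ < changes
```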
On Fri, Jul 16, 2004 at 08:20:51PM -0400, Chris Shoemaker wrote:
> On Thu, Jul 15, 2004 at 07:06:28PM -0700, Wayne Davison wrote:
> > + max_map_size = MIN(MAX_MAP_SIZE, blength * 32);
>
> This makes max_map_size a multiple (32) of blength
> for a large range (blength*32 < MAX_MAP_SIZE),
Oops,
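(For readers following along, the arithmetic in the quoted line works out as
below; the quoted patch doesn't show MAX_MAP_SIZE's actual value, so the
256 KiB here is just an assumed stand-in for illustration.)

```python
MAX_MAP_SIZE = 256 * 1024  # assumed stand-in value, for illustration only

def max_map_size(blength: int) -> int:
    """Mirror of the C line: max_map_size = MIN(MAX_MAP_SIZE, blength * 32);"""
    return min(MAX_MAP_SIZE, blength * 32)

# while blength * 32 is below the cap, the map size is an exact
# 32x multiple of the block length...
print(max_map_size(700))    # 22400 (= 700 * 32)
# ...and past the cap it is simply clamped
print(max_map_size(65536))  # 262144 (= MAX_MAP_SIZE)
```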