https://bugzilla.samba.org/show_bug.cgi?id=12769
--- Comment #14 from Roland Haberkorn ---
After the new version made it into my system I can confirm it works like a
charm. Many thanks for the effort.
--- Comment #13 from MulticoreNOP ---
(In reply to Wayne Davison from comment #12)
Hi Wayne,
that is great news!
Could you shine some light on why there is such a limit in the first place?
Personally I think such an arbitrary limit is rather
Wayne Davison changed:
           What       |Removed |Added
           Resolution |---     |FIXED
           Status     |NEW     |
--- Comment #11 from MulticoreNOP ---
I want to add that the original implementation also leads to the following
error:
ERROR: out of memory in flist_expand [sender]
rsync error: error allocating core memory buffers (code 22) at util2.c(106)
https://bugzilla.samba.org/show_bug.cgi?id=12769
--- Comment #10 from MulticoreNOP ---
(In reply to Simon Matter from comment #7)
#define MALLOC_MAX 0x1
is greater than UINT32_MAX and will therefore overflow and fail in an
unpredictable and unfriendly manner.
#define MALLOC_MAX
MulticoreNOP changed:
           What |Removed |Added
           CC   |        |multicore...@mailbox.org
--- Comment #9
--- Comment #8 from Roland Haberkorn ---
Is it possible to get rid of the restriction entirely? I'd rather run into
out-of-memory situations than into this restriction. Without it I could just
throw some more RAM at the machine; with this
--- Comment #7 from Simon Matter ---
I've used patches like this to solve our issues:
--- rsync-3.1.3/util2.c.orig	2018-01-15 04:55:07.0 +0100
+++ rsync-3.1.3/util2.c	2020-03-11 13:07:07.138141415 +0100
@@ -59,7 +59,7 @@
return
--- Comment #6 from Dave Gordon ---
The hash table doubles each time it reaches 75% full. A hash table for 32M
items @ 16 bytes each (8-byte key, 8-byte void *data) needs 512MB of memory. At
the next doubling (up to 64M items) it hits the array
On 11/10/2019 13:53, just subscribed for rsync-qa from bugzilla via
rsync wrote:
--- Comment #5 from Simon Matter ---
I'm suffering from the same problem and was wondering whether anyone has found
a solution, a workaround, or another tool to do the job.
At first I thought it might be a bug that had already been fixed, so I tried
the latest release
--- Comment #4 from Ovidiu Stanila ---
We hit the same issue on a CentOS 6 server (kernel 2.6.32-754.18.2.el6.x86_64),
the sync would break with the following error:
# /usr/bin/rsync --debug=HASH --stats --no-inc-recursive -aHn --delete /app/
--- Comment #3 from Roland Haberkorn ---
OK, I dug somewhat deeper and found a second difference between my two sources.
One is the original data; the other is a differential rsync backup with hard
links.
I then built a
--- Comment #2 from Roland Haberkorn ---
I did some further investigation...
First thing to add: The ext4 file systems are hard-linked differential rsync
backups of the real data on XFS.
I changed the testcase by deleting the
--- Comment #1 from Roland Haberkorn ---
If you want me to run further tests with other file systems, I am entirely
willing to produce fake data and run them. I just haven't done so yet because
of my lack of knowledge about the