[Bug 12769] error allocating core memory buffers (code 22) depending on source file system
https://bugzilla.samba.org/show_bug.cgi?id=12769

--- Comment #14 from Roland Haberkorn ---

After the new version made it into my system, I can confirm it works like a charm. Many thanks for the effort.

--
You are receiving this mail because: You are the QA Contact for the bug.
--
Please use reply-all for most replies to avoid omitting the mailing list.
To unsubscribe or change options: https://lists.samba.org/mailman/listinfo/rsync
Before posting, read: http://www.catb.org/~esr/faqs/smart-questions.html
[Bug 12769] error allocating core memory buffers (code 22) depending on source file system
https://bugzilla.samba.org/show_bug.cgi?id=12769

--- Comment #13 from MulticoreNOP ---

(In reply to Wayne Davison from comment #12)

Hi Wayne, that is great news! Could you shed some light on why there is such a limit in the first place? Personally I find such an arbitrary limit rather unexpected, and many people will have a long-running rsync command (for me each try took ~6 hrs until I could reproduce the error) fail on them before they are even aware that such a limit exists. On top of that, I guess only a small fraction of those people will find this parameter as the solution to their problem. To me this is like 'cp -R a b' failing if a/ contains more than 1337 files.

Cheers, Mc NOP
[Bug 12769] error allocating core memory buffers (code 22) depending on source file system
https://bugzilla.samba.org/show_bug.cgi?id=12769

Wayne Davison changed:

           What        |Removed    |Added
           Resolution  |---        |FIXED
           Status      |NEW        |RESOLVED

--- Comment #12 from Wayne Davison ---

I fixed the allocation args to be size_t values (and improved a bunch of allocation error checking while I was at it). I then added an option that lets you override this allocation sanity-check value. The default is still 1G per allocation, but you can now specify a much larger value (up to "--max-alloc=8192P-1"). If you want to make a larger value the default for your copies, export RSYNC_MAX_ALLOC in the environment with the size value of your choice. Committed for release in 3.2.2.
[Bug 12769] error allocating core memory buffers (code 22) depending on source file system
https://bugzilla.samba.org/show_bug.cgi?id=12769

--- Comment #11 from MulticoreNOP ---

I want to add that the original implementation also leads to the following error:

ERROR: out of memory in flist_expand [sender]
rsync error: error allocating core memory buffers (code 22) at util2.c(106) [sender=3.1.2]

For me it hit that message for an rsync --delete-before -H --moreStuff.. at around the 117 million files mark. The patch fixed that problem for me.
[Bug 12769] error allocating core memory buffers (code 22) depending on source file system
https://bugzilla.samba.org/show_bug.cgi?id=12769

--- Comment #10 from MulticoreNOP ---

(In reply to Simon Matter from comment #7)

#define MALLOC_MAX 0x100000000 is greater than UINT32_MAX and will therefore overflow, failing in an unpredictable and unfriendly manner. #define MALLOC_MAX 0xD09DC300 (~3.5 GB) leaves some room to detect a "just too big for this implementation" case and fail gracefully. Yet the real culprit here is the use of "unsigned int" as opposed to size_t. I therefore propose the attached patch, which removes MALLOC_MAX in its entirety and should allow arrays as big as the available virtual memory can support.
[Bug 12769] error allocating core memory buffers (code 22) depending on source file system
https://bugzilla.samba.org/show_bug.cgi?id=12769

MulticoreNOP changed:

           What  |Removed    |Added
           CC    |           |multicore...@mailbox.org

--- Comment #9 from MulticoreNOP ---

Created attachment 16074 --> https://bugzilla.samba.org/attachment.cgi?id=16074&action=edit
remove MALLOC_MAX from util2.c

Change data types from "unsigned int" to "size_t", as they should be, and remove the arbitrary size limit.
[Bug 12769] error allocating core memory buffers (code 22) depending on source file system
https://bugzilla.samba.org/show_bug.cgi?id=12769

--- Comment #8 from Roland Haberkorn ---

Is it possible to get rid of the restriction entirely? I'd rather run into out-of-memory situations than into this restriction. Without it I could just throw some more RAM at the machine; with this restriction I would have to rebuild rsync whenever there is a newer version.
[Bug 12769] error allocating core memory buffers (code 22) depending on source file system
https://bugzilla.samba.org/show_bug.cgi?id=12769

--- Comment #7 from Simon Matter ---

I've used patches like this to solve our issues:

--- rsync-3.1.3/util2.c.orig	2018-01-15 04:55:07.000000000 +0100
+++ rsync-3.1.3/util2.c	2020-03-11 13:07:07.138141415 +0100
@@ -59,7 +59,7 @@
 	return True;
 }
 
-#define MALLOC_MAX 0x40000000
+#define MALLOC_MAX 0x100000000
 
 void *_new_array(unsigned long num, unsigned int size, int use_calloc)
 {
[Bug 12769] error allocating core memory buffers (code 22) depending on source file system
https://bugzilla.samba.org/show_bug.cgi?id=12769

--- Comment #6 from Dave Gordon ---

The hash table doubles each time it reaches 75% full. A hash table for 32M items @ 16 bytes each (8-byte key, 8-byte void *data) needs 512 MB of memory. At the next doubling (up to 64M items) it hits the array allocator limit in util2.c:

#define MALLOC_MAX 0x40000000

void *_new_array(unsigned long num, unsigned int size, int use_calloc)
{
	if (num >= MALLOC_MAX/size)
		return NULL;
	return use_calloc ? calloc(num, size) : malloc(num * size);
}

void *_realloc_array(void *ptr, unsigned int size, size_t num)
{
	if (num >= MALLOC_MAX/size)
		return NULL;
	if (!ptr)
		return malloc(size * num);
	return realloc(ptr, size * num);
}

No single array allocation or reallocation is allowed to exceed MALLOC_MAX (1 GB). Hence rsync can only handle up to 32M items per invocation if a hash table is required (e.g. for tracking hardlinks when -H is specified).

HTH,
Dave
[Bug 12769] error allocating core memory buffers (code 22) depending on source file system
https://bugzilla.samba.org/show_bug.cgi?id=12769

--- Comment #5 from Simon Matter ---

I'm suffering from the same problem and was wondering if anyone has found a solution, a workaround, or another tool to do the job? First I thought maybe it's a bug which has been fixed already, and tried with the latest release, 3.1.3. Unfortunately no joy, and I still get the same issue. Any help would be much appreciated!
[Bug 12769] error allocating core memory buffers (code 22) depending on source file system
https://bugzilla.samba.org/show_bug.cgi?id=12769

--- Comment #4 from Ovidiu Stanila ---

We hit the same issue on a CentOS 6 server (kernel 2.6.32-754.18.2.el6.x86_64); the sync would break with the following error:

# /usr/bin/rsync --debug=HASH --stats --no-inc-recursive -aHn --delete /app/ :/app/
[sender] created hashtable 2013770 (size: 16, keys: 64-bit)
[sender] created hashtable 2015370 (size: 512, keys: 64-bit)
[sender] growing hashtable 2015370 (size: 1024, keys: 64-bit)
[sender] growing hashtable 2015370 (size: 2048, keys: 64-bit)
[sender] growing hashtable 2015370 (size: 4096, keys: 64-bit)
[sender] growing hashtable 2015370 (size: 8192, keys: 64-bit)
[sender] growing hashtable 2015370 (size: 16384, keys: 64-bit)
[sender] growing hashtable 2015370 (size: 32768, keys: 64-bit)
[sender] growing hashtable 2015370 (size: 65536, keys: 64-bit)
[sender] growing hashtable 2015370 (size: 131072, keys: 64-bit)
[sender] growing hashtable 2015370 (size: 262144, keys: 64-bit)
[sender] growing hashtable 2015370 (size: 524288, keys: 64-bit)
[sender] growing hashtable 2015370 (size: 1048576, keys: 64-bit)
[sender] growing hashtable 2015370 (size: 2097152, keys: 64-bit)
[sender] growing hashtable 2015370 (size: 4194304, keys: 64-bit)
[sender] growing hashtable 2015370 (size: 8388608, keys: 64-bit)
[sender] growing hashtable 2015370 (size: 16777216, keys: 64-bit)
[sender] growing hashtable 2015370 (size: 33554432, keys: 64-bit)
ERROR: out of memory in hashtable_node [sender]
rsync error: error allocating core memory buffers (code 22) at util2.c(106) [sender=3.1.2]

Both the sender and the receiver have the same rsync and OS versions. The source we are trying to transfer is over 3.4 TB (65 million files) with a large number of hard links between various directories (for avoiding duplicate files). We initially increased the memory from 16 GB to 32 GB, but even with that rsync would die with the same error. In the example above, there was over 22 GB of free RAM at the point rsync stopped working.
We also tried "--inplace" instead of "--no-inc-recursive", with the same result. Did we hit some kind of limitation inside of rsync? Is there anything else we should check?
[Bug 12769] error allocating core memory buffers (code 22) depending on source file system
https://bugzilla.samba.org/show_bug.cgi?id=12769

--- Comment #3 from Roland Haberkorn ---

OK, I dug somewhat deeper. I've found a second difference between my two sources: one is the original data, the other is a differential rsync backup with hard links. I then built a testcase of about 50 million dummy files with something like this:

#!/bin/bash
for i in {1..50}
do
    mkdir $i
    #cd $i
    for j in {1..1000}
    do
        mkdir $i/$j
        #cd $j
        for k in {1..1000}
        do
            touch $i/$j/$k
        done
    done
done

Rsyncing this test folder works fine from and to any of the tested file systems (ext4 64-bit, XFS, btrfs). This is true with and without the option -H, as long as there is no hard-linked copy of the source folder in the source file system. The moment there is at least one hard-linked copy, the option -H breaks the run:

roland@msspc25:~$ stat /mnt/rsynctestsource/1/1/1
  File: /mnt/rsynctestsource/1/1/1
  Size: 0	Blocks: 0	IO Block: 4096	regular empty file
Device: 811h/2065d	Inode: 153991349	Links: 2
Access: (0644/-rw-r--r--)  Uid: ( 2001/ roland)  Gid: ( 100/ users)
Access: 2017-05-09 10:54:10.341300841 +0200
Modify: 2017-05-08 09:33:51.535967423 +0200
Change: 2017-05-26 16:21:57.610628573 +0200
 Birth: -

roland@msspc25:~$ rsync -rlptgoDxAn --info=name,progress2 --delete --link-dest=/mnt2/link3/ /mnt/rsynctestsource/ /mnt2/link1/.
              0 100%    0.00kB/s    0:00:00 (xfr#0, to-chk=0/49049050)
roland@msspc25:~$ rsync -rlptgoDxHAn --info=name,progress2 --delete --link-dest=/mnt2/link3/ /mnt/rsynctestsource/ /mnt2/link1/.
              0 100%    0.00kB/s    0:00:00 (xfr#0, ir-chk=1000/25191050)
ERROR: out of memory in hashtable_node [sender]
rsync error: error allocating core memory buffers (code 22) at util2.c(106) [sender=3.1.2]

As you can see, the first run without -H works; the last one with -H doesn't. So I would have to somewhat rename the bug report to "-H breaks the incremental recursion on hard-linked sources". This is equally true for all three file systems tested.
[Bug 12769] error allocating core memory buffers (code 22) depending on source file system
https://bugzilla.samba.org/show_bug.cgi?id=12769

--- Comment #2 from Roland Haberkorn ---

I did some further investigation... First thing to add: the ext4 file systems are hard-linked differential rsync backups of the real data on XFS. I changed the testcase by deleting the --link-dest option. When rsyncing from XFS, the rsync process on the client uses about 3% RAM (of 8 GB total). When rsyncing from ext4, it uses up to about 50% RAM. This picture changes completely when I delete the option -H; in that case copying from ext4 also uses less than 2% RAM. My guess would be that -H perhaps breaks the incremental recursion when copying from ext4.
[Bug 12769] error allocating core memory buffers (code 22) depending on source file system
https://bugzilla.samba.org/show_bug.cgi?id=12769

--- Comment #1 from Roland Haberkorn ---

If you want me to run further tests with other file systems, I am entirely willing to produce fake data and run them. I just haven't done so yet because of my lack of knowledge about the underlying mechanisms, and because I am not totally sure whether this is an rsync problem or a kernel issue. To add two more things: we also saw this issue when the data was mounted via NFSv3 or v4. The target file system does not matter; we've had this issue with btrfs, ext4 and XFS.