HighFree: 11877028 kB  LowFree: 391020 kB
HighFree: 11761892 kB  LowFree: 342380 kB
HighFree: 11654316 kB  LowFree: 315860 kB
HighFree: 11578756 kB  LowFree: 291928 kB
HighFree: 11490936 kB  LowFree: 264788 kB
That's at the end. I fail to see the ENOMEM: plenty of LowFree and HighFree. Some of the slabs do have high counts, but this is a big box. What is crashing? Is the server oopsing? oom-kill? Or is the user-space process erroring out?

Paul Jimenez wrote:
> I have that complete file - from before rsync to the crash (~4MB) - at
> http://www.rgmadvisors.com/~pj/memslabinfo.
>
> Kernel is 2.6.16.7 vanilla, and the version of ocfs2 it came with.
>
> --pj
>
> On Jun 29, 2006, at 2:10 PM, Sunil Mushran wrote:
>
>> I would like the entire /proc/meminfo and /proc/slabinfo.
>> Dump it to a file every 1 min or so.
>>
>> What version of the kernel/ocfs2?
>>
>> Paul Jimenez wrote:
>>
>>> On Jun 29, 2006, at 8:22 AM, Brian Long wrote:
>>>
>>>> On Wed, 2006-06-28 at 17:03 -0500, Paul Jimenez wrote:
>>>>
>>>>> I'm getting out-of-memory errors trying to do 'rsync -av /foo /bar',
>>>>> where /foo is a local dir and /bar is an ocfs2 filesystem running on
>>>>> an ~6TB ATA-over-Ethernet box.
>>>>
>>>> Paul,
>>>>
>>>> Can you also include some information about your /foo partition? Is it
>>>> millions of little files or hundreds of large files? What is the RSS of
>>>> rsync when you run out of memory?
>>>>
>>>> http://samba.anu.edu.au/rsync/FAQ.html#5
>>>> http://lists.samba.org/archive/rsync/2002-July/003160.html
>>>
>>> /foo is ~4600 files, each about 60MB, for a total of ~259GB.
>>>
>>> Some output after or slightly before it crashed:
>>>
>>> Every 2s: cat /proc/slabinfo | sort -rnk 2 | head        Thu Jun 29 11:58:01 2006
>>>
>>> buffer_head      754620 754632     52   72    1 : tunables  120   60    8 : slabdata  10481  10481      0
>>> bio              225600 225600    128   30    1 : tunables  120   60    8 : slabdata   7520   7520      0
>>> biovec-1         225593 225736     16  203    1 : tunables  120   60    8 : slabdata   1112   1112      0
>>> journal_head     175548 182448     52   72    1 : tunables  120   60    8 : slabdata   2530   2534      0
>>> aoe_bufs         112536 112554     48   78    1 : tunables  120   60    8 : slabdata   1443   1443      0
>>> radix_tree_node   41510  41510    276   14    1 : tunables   54   27    8 : slabdata   2965   2965      0
>>> sysfs_dir_cache    3644   3772     40   92    1 : tunables  120   60    8 : slabdata     41     41      0
>>> size-32            2938   4407     32  113    1 : tunables  120   60    8 : slabdata     39     39      0
>>> size-64            2354   2596     64   59    1 : tunables  120   60    8 : slabdata     44     44      0
>>> dentry_cache       2086   3090    128   30    1 : tunables  120   60    8 : slabdata    103    103      0
>>>
>>> Free swap: 16779608kB
>>> 4718592 pages of RAM
>>> 4489216 pages of HIGHMEM
>>> 562809 reserved pages
>>> 530215 pages shared
>>> 0 pages swap cached
>>> 136994 pages dirty
>>> 61878 pages writeback
>>> 142502 pages mapped
>>> 29403 pages slab
>>> 480 pages pagetables
>>>
>>> 4718592 pages of RAM
>>> 4489216 pages of HIGHMEM
>>> 562809 reserved pages
>>> 530215 pages shared
>>> 0 pages swap cached
>>> 136994 pages dirty
>>> 61876 pages writeback
>>> 142502 pages mapped
>>> 29425 pages slab
>>> 480 pages pagetables
>>>
>>> I don't think it's rsync running things OOM; its memory consumption is
>>> file-count based, and 4600 files just isn't that many.
>>>
>>> The tunables I had in place this time, from the AoE FAQ
>>> (http://www.coraid.com/support/linux/EtherDrive-2.6-HOWTO.html#toc5.18),
>>> were:
>>>
>>> vm.overcommit_memory=2
>>> vm.dirty_ratio=3
>>> vm.dirty_background_ratio=3
>>> vm.min_free_kbytes=5120
>>>
>>> Any help appreciated.
>>>
>>> --pj

_______________________________________________
Ocfs2-users mailing list
Ocfs2-users@oss.oracle.com
http://oss.oracle.com/mailman/listinfo/ocfs2-users
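[Editor's note: the periodic dump Sunil asks for above ("Dump it to a file every 1 min or so") could be scripted roughly as below. This is only a sketch; the function name, parameters, and log path are illustrative, not from the thread.]

```shell
#!/bin/sh
# dump_mem_stats COUNT INTERVAL LOGFILE
# Append timestamped snapshots of /proc/meminfo and /proc/slabinfo to
# LOGFILE, COUNT times, INTERVAL seconds apart.
# Note: on recent kernels /proc/slabinfo is readable only by root.
dump_mem_stats() {
    count=$1; interval=$2; log=$3
    i=0
    while [ "$i" -lt "$count" ]; do
        {
            echo "==== $(date) ===="
            cat /proc/meminfo
            echo "---- slabinfo ----"
            cat /proc/slabinfo 2>/dev/null
        } >> "$log"
        i=$((i + 1))
        # sleep between samples, but not after the last one
        [ "$i" -lt "$count" ] && sleep "$interval"
    done
}

# e.g. a day's worth of one-minute samples:
# dump_mem_stats 1440 60 /var/tmp/memslabinfo.log
```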
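[Editor's note: to reason about whether the high slab counts matter, the slabdata columns quoted above can be turned into bytes, since on a 32-bit box with this much HIGHMEM the slab caches all live in lowmem. A sketch, assuming the slabinfo version 2.x layout shown in the thread and 4 KB pages:]

```shell
#!/bin/sh
# slab_bytes FILE
# Estimate bytes pinned by each slab cache from a slabinfo (version 2.x) file.
# A data row looks like:
#   name active_objs num_objs objsize objperslab pagesperslab \
#     : tunables limit batch shared : slabdata active_slabs num_slabs sharedavail
# so bytes = num_slabs * pagesperslab * page size (4096 assumed here).
slab_bytes() {
    awk 'NR > 2 {
        bytes = $(NF - 1) * $6 * 4096   # num_slabs * pagesperslab * 4 KB
        printf "%-20s %12d\n", $1, bytes
        total += bytes
    }
    END { printf "%-20s %12d\n", "TOTAL", total }' "$1"
}
```

Fed the buffer_head row above (10481 slabs of one page each), this reports 42930176 bytes, i.e. roughly 41 MB of lowmem for buffer heads alone.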