On Thu, Nov 16, 2017 at 02:37:40PM +0100, Sebastien Marie wrote:
> Hi,
>
> I am working on a new lang/rust version (the next stable is due in 1
> week), and I have problems building it under i386 (full dmesg below).
>
> I suspect memory pressure in some way, but I don't know what options I
> have to work around it (if possible).
>
> It is possible that the occasional failures sthen@ saw in bulk builds
> with the current rustc version (1.21) in ports are related. Version
> 1.22 hits it on almost every build attempt.
>
> In short:
>
> - the build aborts due to ENOMEM (mmap(2) calls return ENOMEM; the
> program asked for 4096 bytes)
> - kdump(8) reports lot of
> mmap(0,0x1000,0x3<PROT_READ|PROT_WRITE>,0x1002<MAP_PRIVATE|MAP_ANON>,-1,0)
> calls
> - at ENOMEM time, the rustc process has SIZE and RES (from top(1)) of
> almost 1.7 GB
> - the ulimit data was 3145728 (3 GB)
> - the host has 4 GB RAM installed and dmesg reports avail mem = 3149758464
> (3003MB)
> - the host has swap, but the system doesn't seem to use it at any time
> - rustc is threaded (but only one thread actively works)
>
As the problem seems to still be present with the new bootstrap I made,
here are some new elements.

$ ulimit -d
3145728
$ /usr/bin/time -l make:
1758856 maximum resident set size
0 average shared memory size
0 average unshared data size
0 average unshared stack size
1020823 minor page faults
6132 major page faults
0 swaps
0 block input operations
359 block output operations
0 messages sent
0 messages received
17 signals received
402 voluntary context switches
4453 involuntary context switches
During the build (just before the program got ENOMEM):
# vmstat -s
4096 bytes per page
767991 pages managed
104830 pages free
449136 pages active
33933 pages inactive
0 pages being paged out
17 pages wired
13022 pages zeroed
4 pages reserved for pagedaemon
6 pages reserved for kernel
754287 swap pages
0 swap pages in use
0 total anon's in system
0 free anon's
510782856 page faults
514110566 traps
9052390 interrupts
50707681 cpu context switches
1231492 fpu context switches
39836732 software interrupts
704863738 syscalls
0 pagein operations
336148 forks
141768 forks where vmspace is shared
32 kernel map entries
304156402 zeroed page hits
20283426 zeroed page misses
0 number of times the pagedaemon woke up
0 revolutions of the clock hand
0 pages freed by pagedaemon
0 pages scanned by pagedaemon
0 pages reactivated by pagedaemon
0 busy pages found by pagedaemon
1036242115 total name lookups
cache hits (95% pos + 1% neg) system 0% per-directory
deletions 0%, falsehits 0%, toolong 0%
1086 select collisions
The same command when the host is idle:
# vmstat -s
4096 bytes per page
767991 pages managed
538799 pages free
11272 pages active
49343 pages inactive
0 pages being paged out
17 pages wired
67358 pages zeroed
4 pages reserved for pagedaemon
6 pages reserved for kernel
754287 swap pages
0 swap pages in use
0 total anon's in system
0 free anon's
511475524 page faults
514793949 traps
9278999 interrupts
54878805 cpu context switches
1235256 fpu context switches
53226767 software interrupts
712310754 syscalls
0 pagein operations
339320 forks
141768 forks where vmspace is shared
33 kernel map entries
304482546 zeroed page hits
20283426 zeroed page misses
0 number of times the pagedaemon woke up
0 revolutions of the clock hand
0 pages freed by pagedaemon
0 pages scanned by pagedaemon
0 pages reactivated by pagedaemon
0 busy pages found by pagedaemon
1075570460 total name lookups
cache hits (95% pos + 0% neg) system 0% per-directory
deletions 0%, falsehits 0%, toolong 0%
1086 select collisions
What I noted:
> 104830 pages free
some pages are still free: 409 MB
is some percentage of pages reserved for root (like the reserve for
inodes on a filesystem)?
> 17 pages wired
only a few pages are wired in RAM, so other pages could go to swap if needed
> 754287 swap pages
> 0 swap pages in use
plenty of swap pages, none of them in use.
I agree that rustc is a bit of a memory hog. It is failing while
building the librustc component, which is a really big piece of code.
When tracing the execution, it fails in the "translate to LLVM IR"
step.
For now, I am slowly digging into uvm(9) to understand the problem. Any
pointer to documentation would be appreciated! I am currently reading
the articles and thesis from Charles D. Cranor, as UVM comes from him.
deraadt@ pointed me to a possible fragmentation problem (rustc is
allocating lots of blocks of memory) and to the fact that each
allocation implies a guard page (so it consumes more memory than
requested).
But allocations are done in 4096-byte units (the page size), so the
allocator should only need to find 2 consecutive free pages (1 for data
+ 1 for the guard).
Are guard pages counted in the vmstat(8) output? If so, there is still
memory available.
How is swap involved? When does uvm decide to push pages to swap?
--
Sebastien Marie