> * We need to find out why the oom-killer is not killing things fast
>   enough. Based on our results, we might consider configuring
>   /proc/$pid/oom_adj to preferentially kill some processes (e.g., the
>   foreground [or background?] activities.)
In the cases I've been playing with, browse is the only activity that is
running. I will try bumping its oom_adj to see whether this improves OOM
kill latency; a rough sketch of the knobs involved follows at the end of
this mail.

> * We need to determine whether the oom-killer is killing the right
>   processes. (sysctl's vm.oom_dump_tasks can be set to 1 in order to
>   get more verbosity from the oom-killer when it fires).

From watching top, it appears that we are killing the correct process.
For example, when running the test case from #8316, the OOM killer does
not kill browse, but just kills the gnash instance which is chewing up
RAM.

> - the warnings in the ramfs and tmpfs code about the deadlocks that
>   tmpfsen can generate under low- or no-memory conditions.

I have yet to see an actual deadlock. What I saw when trying to reproduce
#3816 is that the OOM killer just takes a very, very long time to kick
in.

> - whether our kernel "overcommits" when allocation requests are made?

By default vm.overcommit_memory is set to 0, which only refuses "obvious
overcommits of address space". I will try setting it to 2 along with
vm.overcommit_ratio set to 0 to force no overcommit at all, and see how
the system reacts.

~Deepak
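P.S. Here is a minimal Python sketch of the settings discussed above, in
case anyone wants to repeat the experiment. It assumes root, an older
kernel that still exposes /proc/<pid>/oom_adj, and that the activity
shows up as "browse" in its cmdline; the +10 adjustment is just a
placeholder value, not a recommendation.

    #!/usr/bin/env python
    # Sketch of the OOM / overcommit knobs discussed above.
    # Assumes root and a kernel that still has /proc/<pid>/oom_adj.

    import os

    def pids_by_name(name):
        """Return pids whose /proc/<pid>/cmdline mentions `name`."""
        pids = []
        for entry in os.listdir('/proc'):
            if not entry.isdigit():
                continue
            try:
                with open('/proc/%s/cmdline' % entry) as f:
                    if name in f.read():
                        pids.append(int(entry))
            except IOError:
                pass  # process exited between listdir and open; skip it
        return pids

    def set_oom_adj(pid, value):
        """Positive values make the process a more likely OOM victim."""
        with open('/proc/%d/oom_adj' % pid, 'w') as f:
            f.write(str(value))

    def set_sysctl(path, value):
        """Write a sysctl via /proc/sys, e.g. set_sysctl('vm/oom_dump_tasks', 1)."""
        with open('/proc/sys/%s' % path, 'w') as f:
            f.write(str(value))

    if __name__ == '__main__':
        # More verbose OOM-killer reports when it fires.
        set_sysctl('vm/oom_dump_tasks', 1)
        # Strict accounting: mode 2 with ratio 0 limits commits to swap only.
        set_sysctl('vm/overcommit_memory', 2)
        set_sysctl('vm/overcommit_ratio', 0)
        # Bump browse's oom_adj ("browse" and +10 are guesses for the test).
        for pid in pids_by_name('browse'):
            set_oom_adj(pid, 10)

Note that with no swap configured, overcommit_memory=2 plus
overcommit_ratio=0 leaves essentially no commit headroom, so this is
strictly an experiment to see how the system reacts.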