On Thu, Jun 12, 2014 at 03:00:17PM +0200, Stefan Bader wrote:
> When reading from /proc/stat we allocate a large buffer to maximise
> the chances of the results being from a single run and thus internally
> consistent. This is currently sized at 128 * num_possible_cpus() which,
> in the face of kernels configured to handle large systems (256 cpus
> plus), results in the buffer being an order-4 allocation or more.
> When system memory becomes fragmented these cannot be guaranteed,
> leading to read failures caused by allocation failures.
>
> There seem to be two issues in play here. Firstly, the allocation is
> going to be vastly oversized in the common case, as we only consume the
> buffer based on num_online_cpus(). Secondly, regardless of size we
> should not be requiring allocations greater than PAGE_ALLOC_COSTLY_ORDER,
> as allocations above this order are significantly more likely to fail.
>
> The following patch addresses both of these issues. Does that make sense
> generally? It seemed to stop top complaining wildly for the reporter
> at least.
Hi Stefan,

see also https://lkml.org/lkml/2014/5/21/341

and one possible solution:

https://lkml.org/lkml/2014/5/30/191

and the other one:

https://lkml.org/lkml/2014/6/12/92
https://lkml.org/lkml/2014/6/12/107

Thanks,
Heiko

