On 09/15/2015 06:22 AM, Sergey Senozhatsky wrote:
> On (09/15/15 00:08), Dan Streetman wrote:
> [..]
> correct. a bit of internals: we don't scan all the zspages every time.
> each class has stats for allocated objects, used objects, etc. so we
> 'compact' only classes that can be compacted:
>
> static unsigned long zs_can_compact(struct size_class *class)
> {
> 	unsigned long obj_wasted;
>
> 	obj_wasted = zs_stat_get(class, OBJ_ALLOCATED) -
> 		zs_stat_get(class, OBJ_USED);
>
> 	obj_wasted /= get_maxobj_per_zspage(class->size,
> 			class->pages_per_zspage);
>
> 	return obj_wasted * class->pages_per_zspage;
> }
>
> if we can free any zspages (which is at least one page), then we
> attempt to do so.
>
> is compaction the root cause of the symptoms Vitaly observes?
He mentioned the "compact_stalls" counter, which in /proc/vmstat counts stalls of the traditional physical-memory compaction, not the zsmalloc-specific one. That would imply high-order allocations. Does zsmalloc try those first before falling back to order-0 pages linked together manually?

