On Thu 14-02-13 12:39:26, Andrew Morton wrote:
> On Thu, 14 Feb 2013 12:03:49 +0000
> Mel Gorman <mgor...@suse.de> wrote:
> 
> > Rob van der Heij reported the following (paraphrased) on private mail.
> > 
> >     The scenario is that I want to avoid backups to fill up the page
> >     cache and purge stuff that is more likely to be used again (this is
> >     with s390x Linux on z/VM, so I don't give it so much memory that
> >     we stop caring). So I have something with LD_PRELOAD that
> >     intercepts the close() call (from tar, in this case) and issues
> >     a posix_fadvise() just before closing the file.
> > 
> >     This mostly works, except for small files (less than 14 pages)
> >     that remain in page cache after the fact.
> 
> Sigh.  We've had the "my backups swamp pagecache" thing for 15 years
> and it's still happening.
> 
> It should be possible nowadays to toss your backup application into a
> container to constrain its pagecache usage.  So we can type
> 
>       run-in-a-memcg -m 200MB /my/backup/program
> 
> and voila.  Does such a script exist and work?

The script would be as simple as:
cgcreate -g memory:backups/`whoami`
cgset -r memory.limit_in_bytes=200M backups/`whoami`
cgexec -g memory:backups/`whoami` /my/backup/program

It just expects that the admin sets up a backups group which allows the
user to create a subgroup (w permission on the directory) and probably
sets some reasonable cap for all backups.
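The one-time admin-side setup could look roughly like this (illustrative
values only; the paths assume the v1 memory controller mounted at
/sys/fs/cgroup/memory, the 1G cap is just a placeholder, and a real setup
would likely grant the write bit to a dedicated group rather than to
everyone):
cgcreate -g memory:backups
cgset -r memory.limit_in_bytes=1G backups
chmod o+w /sys/fs/cgroup/memory/backups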
[...]
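As an aside, the interceptor Rob describes at the top of the thread can be
a very small LD_PRELOAD shim along these lines (a sketch only, not his
actual code; file and symbol names are illustrative):

/* fadvclose.c -- sketch of the close() interceptor described above.
 * Build:  gcc -shared -fPIC -o fadvclose.so fadvclose.c -ldl
 * Use:    LD_PRELOAD=./fadvclose.so tar cf /dev/null /some/tree
 */
#define _GNU_SOURCE
#include <dlfcn.h>
#include <fcntl.h>
#include <unistd.h>

int close(int fd)
{
	static int (*real_close)(int);

	if (!real_close)
		real_close = (int (*)(int))dlsym(RTLD_NEXT, "close");

	/* Ask the kernel to drop this file's clean pagecache pages
	 * just before the descriptor goes away. */
	posix_fadvise(fd, 0, 0, POSIX_FADV_DONTNEED);

	return real_close(fd);
}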
-- 
Michal Hocko
SUSE Labs