On Tue, 06 Aug 2013 07:24:26 +0200
Arne Jansen <[email protected]> wrote:

> On 05.08.2013 18:35, Tomasz Chmielewski wrote:
> >> I am trying to use qgroups
> >> functionality & with a basic random-write workload, it constantly
> >> keeps leaking memory & within few minutes of IO, there is either
> >> out-of-memory killer trying to kill some tasks or there are
> >> page-allocation failures that btrfs or other kernel module
> >> experiences.
> > 
> > FYI, I just saw something similar with 3.10 on a server with 32 GB
> > RAM:
> > 
> > The result was a frozen server and a need to hard reset.
> > 
> 
> What do I have to do to reproduce it here? How do you generate the
> load? What is the disk setup, what the qgroups setup?

Unfortunately I don't have a way to reproduce it, as the issue
has only happened to me once.

Server uptime was about 2 weeks, and there were millions of files with
multiple hardlinks on disk (a backuppc archive copied from ext4 to
btrfs). Then "rm" had been running for many days, about 5 instances in
parallel, alongside many rsync jobs, with snapshots being added and
removed at the same time.
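For reference, the load looked roughly like the sketch below. This is a hypothetical approximation, not the original setup: paths, file counts, and parallelism are illustrative, and MNT would point at a btrfs mount with quotas enabled (btrfs quota enable) to involve qgroups. The btrfs snapshot churn is left commented out so the sketch runs on any filesystem.

```shell
#!/bin/sh
# Hypothetical load generator approximating the workload described above.
# MNT defaults to a temp dir; on a real test it would be a btrfs mount.
MNT="${MNT:-$(mktemp -d)}"

# Build a hardlink-heavy tree, similar to a backuppc pool.
mkdir -p "$MNT/pool"
i=1
while [ "$i" -le 100 ]; do
    printf 'data\n' > "$MNT/pool/file$i"
    ln "$MNT/pool/file$i" "$MNT/pool/link$i"
    i=$((i + 1))
done

# Several removals of the same tree in parallel, roughly matching the
# "rm x5" pattern; races between the walkers are expected, so stderr
# is discarded.
for j in 1 2 3; do
    rm -rf "$MNT/pool" 2>/dev/null &
done
# On btrfs, snapshot add/delete churn would run concurrently, e.g.:
#   btrfs subvolume snapshot "$MNT" "$MNT/snap"
#   btrfs subvolume delete "$MNT/snap"
wait
echo "load pass complete in $MNT"
```

On the real server this ran for days, with rsync writers in parallel, which the sketch does not capture.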

When I saw the issue I searched the btrfs list archives and found a
similar report from the past - but I doubt I will be able to find a
reliable way to reproduce this.

-- 
Tomasz Chmielewski
http://wpkg.org
