On 27/08/13 17:30, Dave Jones wrote:
Seems to do the trick.
We are running many virtualization hosts with Linux 3.11.3, qemu 1.6.1 +
kvm and ksm. The hosts have 128GB RAM, 10GB swap and 24x AMD Opteron
6238 cores.
Several times over the past few weeks, we have seen the OOM killer come
to life and
On 29/05/14 07:37, Marian Marinov wrote:
Hello,
I have the following proposition.
The number of currently running processes is accounted at the root user
namespace. The problem I'm facing is that multiple
containers in different user namespaces share the process counters.
So if containerX
On 21/05/14 09:02, Dominique Martinet wrote:
v9fs_fid_xattr_set is supposed to return 0 on success.
This corrects the behaviour introduced in commit
bdd5c28dcb8330b9074404cc92a0b83aae5606a
"9p: fix return value in case in v9fs_fid_xattr_set()"
(The function returns a negative error on
Hello,
When using parity md RAID backed by fast SSD disks, with btrfs on
top of it, under intensive I/O the machine enters a sort of deadlock and
the load average starts to grow until a point where the machine is no
longer responsive.
At the time when the deadlock happens, there are 2