Hi,
Federico Gimenez wrote (08 Jan 2014 14:05:01 GMT):
> 3.13.0-rc7 apparently fixed the issue. It may have been fixed in
> 3.12.6 but I didn't test that one.
This, added to the fact that Ben tagged this fixed-upstream on
January 10, leads me to think that this RC bug could be closed.
I would do
Your message dated Sun, 09 Mar 2014 00:38:16 +
with message-id <1394325496.2861.61.ca...@deadeye.wl.decadent.org.uk>
and subject line Re: Bug#734172:
has caused the Debian Bug report #734172,
regarding linux-image-3.11-2-686-pae: unable to handle kernel paging request at
c114873. IP: [c1149fc4
to be marked as done.
3.13.0-rc7 apparently fixed the issue. It may have been fixed on
3.12.6 but I didn't test that one.
I've just tested my backup script which puts some load on a couple of
GFS2 filesystems. Last week this would have caused an almost instant
crash but this time it finished without issues.
I didn't say it was the same, just that it has some similarities.
Both have GFS2 on top of active/active DRBD, and both break when there
is some load on the GFS2 filesystem (my first crash a few weeks ago
was with an rm -rf).
I agree they affect different functions, but they still have some
things in common.
Disabling send_page didn't work.
Something as simple as a tar from one GFS2 filesystem to another made it
crash again. :-(
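For reference, the kind of load involved was nothing exotic; a plain tar pipe between two mount points is enough. A minimal sketch (the mount points are hypothetical, substitute your actual GFS2 mounts):

```shell
#!/bin/sh
# Hypothetical mount points -- adjust to your cluster layout.
SRC="${SRC:-/mnt/gfs2-a}"
DST="${DST:-/mnt/gfs2-b}"

# Stream a tar of SRC straight into DST. On the affected kernels,
# sustained read/write load of this sort on GFS2 over dual-primary
# DRBD was enough to trigger the Oops.
tar -C "$SRC" -cf - . | tar -C "$DST" -xf -
```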
Looks somewhat similar to
https://bugzilla.redhat.com/show_bug.cgi?id=1023431
On Mon, 2014-01-06 at 16:08 -0200, Federico Gimenez wrote:
> Looks somewhat similar to
> https://bugzilla.redhat.com/show_bug.cgi?id=1023431
I don't think so, that's a BUG versus an Oops and not in the same
function.
Ben.
--
Ben Hutchings
Any smoothly functioning technology is indistinguishable from a rigged demo.
I'm going to test setting this parameter (although I don't use Xen):
http://www.drbd.org/users-guide/s-xen-drbd-mod-params.html
Googling around, there seems to be a relation between the Oops, DRBD
active/active, and this setting.
The DRBD source code says this, which sounds like it could be related:
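For what it's worth, `disable_sendpage` is a parameter of the `drbd` kernel module, so on a non-Xen host it can be set directly rather than through the Xen-oriented instructions on that page. A sketch, assuming a drbd module that exposes the parameter (not verified against this exact kernel):

```shell
# Persistent: have modprobe load drbd with sendpage disabled.
# (The file name under /etc/modprobe.d/ is arbitrary; the
# "options" line is what matters.)
echo "options drbd disable_sendpage=1" > /etc/modprobe.d/drbd.conf

# Runtime toggle, if the parameter is exported writable in sysfs:
echo 1 > /sys/module/drbd/parameters/disable_sendpage

# Verify the current value:
cat /sys/module/drbd/parameters/disable_sendpage
```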
Package: src:linux
Version: 3.11.10-1
Severity: critical
Justification: breaks the whole system
Dear Maintainer,
* What led up to the situation?
Not sure what triggers this. The machines are a two-node cluster with DRBD
and GFS2, controlled by Pacemaker + Corosync.
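For context, GFS2 on both nodes implies DRBD dual-primary mode. A minimal sketch of the relevant resource configuration, in 8.4-style drbd.conf syntax, with hypothetical host, device, and address values:

```
resource r0 {
    net {
        protocol C;                # synchronous replication, required for dual-primary
        allow-two-primaries yes;   # both nodes Primary, so GFS2 can mount on both
    }
    on node-a {
        device    /dev/drbd0;
        disk      /dev/sdb1;
        address   192.168.1.1:7789;
        meta-disk internal;
    }
    on node-b {
        device    /dev/drbd0;
        disk      /dev/sdb1;
        address   192.168.1.2:7789;
        meta-disk internal;
    }
}
```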
* What exactly did you do (or not do) that was effective (or ineffective)?