[Kernel-packages] [Bug 1768115] Comment bridged from LTC Bugzilla

2018-05-09 Thread bugproxy
--- Comment From mdr...@us.ibm.com 2018-05-09 08:47 EDT ---
(In reply to comment #26)
> Is it essential to have two NUMA nodes for the guest memory to see this bug?
> Can we reproduce it without the NUMA node stuff in the xml?
I haven't attempted it on my end; I can give it a try. But we [...]

[Kernel-packages] [Bug 1768115] Comment bridged from LTC Bugzilla

2018-05-08 Thread bugproxy
--- Comment From p...@au1.ibm.com 2018-05-09 00:25 EDT ---
Is it essential to have two NUMA nodes for the guest memory to see this bug? Can we reproduce it without the NUMA node stuff in the xml?
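
For context, the "NUMA node stuff in the xml" is the <numa> topology block in the libvirt guest definition. A minimal two-node sketch is below; the cpu ranges and memory sizes are illustrative only, not taken from the failing guest's actual config:

    <cpu>
      <numa>
        <!-- Two guest NUMA nodes; cpus/memory values are illustrative -->
        <cell id='0' cpus='0-7' memory='16' unit='GiB'/>
        <cell id='1' cpus='8-15' memory='16' unit='GiB'/>
      </numa>
    </cpu>

Collapsing this to a single <cell> (or dropping the <numa> block entirely) would give the "without NUMA" configuration being asked about.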

[Kernel-packages] [Bug 1768115] Comment bridged from LTC Bugzilla

2018-05-08 Thread bugproxy
--- Comment From mdr...@us.ibm.com 2018-05-08 16:37 EDT ---
Hit another instance of the RAM inconsistencies prior to resuming the guest on the target side (this one is migrating from boslcp6 to boslcp5 and crashing after it resumes execution on boslcp5). The signature is eerily similar to the [...]

[Kernel-packages] [Bug 1768115] Comment bridged from LTC Bugzilla

2018-05-07 Thread bugproxy
--- Comment From mdr...@us.ibm.com 2018-05-07 14:48 EDT ---
The RCU connection is possibly a red herring. I tested the above theory about RCU timeouts/warnings being a trigger by modifying QEMU to allow the guest timebase to be advanced artificially, to trigger RCU timeouts/warnings in rapid [...]

[Kernel-packages] [Bug 1768115] Comment bridged from LTC Bugzilla

2018-05-04 Thread bugproxy
--- Comment From mdr...@us.ibm.com 2018-05-04 09:09 EDT ---
(In reply to comment #15)
> This is not the same as the original bug, but I suspect they are part of a
> class of issues we're hitting while running under very particular
> circumstances which might not generally be seen during [...]

[Kernel-packages] [Bug 1768115] Comment bridged from LTC Bugzilla

2018-04-30 Thread bugproxy
--- Comment From dougm...@us.ibm.com 2018-04-30 15:31 EDT ---
Both logs show that the dmesg buffer has been overrun, so by the time you get to xmon and run "dl" you've lost the messages that show what happened before things went wrong. You will need to be collecting console output from [...]
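
One way to collect console output persistently (a sketch, assuming a pty-backed serial console; the log path is hypothetical) is to attach a <log> element to the guest's serial device in the domain XML, so the host-side file survives a guest dmesg ring-buffer overrun:

    <serial type='pty'>
      <!-- libvirt appends everything the guest emits on this console
           to the host-side file; the path is illustrative -->
      <log file='/var/log/libvirt/guest-console.log' append='on'/>
      <target port='0'/>
    </serial>

Running "virsh console" on the host and teeing its output to a file would serve the same purpose for a single run.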