On Thu, Jun 1, 2017 at 1:38 PM, Vladyslav Drok wrote:
> Hello qemu community!
>
> I come from openstack world, and one of our customers complains about an
> issue with huge pages on compute nodes. From the "virsh freepages --all" and
> "cat /proc/meminfo", they see that 4 huge
Hi,
I've observed this issue previously on an old 3.10 branch but wrote it
off due to the inability to reproduce it in any meaningful way. Currently
I am seeing it on a 3.10 branch where all well-known KVM-related and
RCU-related issues are more or less patched.
Way to obtain a problematic
> Thanks, I had missed this queue back then!
JFEI: virtqueue_map allows far easier 'unlimited growth' of the wb
cache with a stuck storage backend (it seems that the writes get acked
for the guest OS but are actually still floating in the cache). I have
no stable reproducer yet to put it under
On Wed, Feb 17, 2016 at 6:30 PM, Igor Mammedov <imamm...@redhat.com> wrote:
> On Wed, 17 Feb 2016 15:22:29 +0300
> Andrey Korolyov <and...@xdel.ru> wrote:
>
>> Hello Igor, everyone,
>>
>> we are seemingly running into the issue with "virtio: error tryin
Hello Igor, everyone,
we are seemingly running into the issue with "virtio: error trying to
map MMIO memory" on a 'legacy' vhost-net with 64 regions, on VMs with a
relatively small number of DIMMs (fewer than ten, of 512MB and larger),
for which it could appear on literally every boot. I could
On Tue, Jan 19, 2016 at 10:13 AM, Gerd Hoffmann <kra...@redhat.com> wrote:
> On Tue, 2016-01-19 at 02:49 +0300, Andrey Korolyov wrote:
>> On Mon, Jan 18, 2016 at 4:55 PM, Gerd Hoffmann <kra...@redhat.com> wrote:
>> > Hi,
>> >
>> >> > ok. Ha
On Mon, Jan 18, 2016 at 12:38 PM, Gerd Hoffmann <kra...@redhat.com> wrote:
> On Fri, 2016-01-15 at 21:08 +0300, Andrey Korolyov wrote:
>> Just checked, Linux usb driver decided to lose a disk during a
>> 'stress-test' over unpacking linux source instead of triggering a
On Mon, Jan 18, 2016 at 4:55 PM, Gerd Hoffmann wrote:
> Hi,
>
>> > ok. Had no trouble with freebsd, will go fetch netbsd images. What
>> > arch is this? i386? x86_64?
>>
>> i386 7.0 for the reference, but I'm sure that this wouldn't matter in
>> any way.
>
> 7.0 trace:
Just checked: the Linux USB driver decided to lose a disk during a
'stress-test' of unpacking the Linux source instead of triggering an
assertion in 2.5 (and irreparably damaged its ext4 as well), the NetBSD
7.0 reboot action hangs on USB_RESET, and NetBSD 5.1 triggers the second
of the mentioned asserts. Backend
> On Sat, 2016-01-09 at 20:34 +0300, Andrey Korolyov wrote:
>> > > > Hello,
>> > > >
>> > > > during regular operations within linux guest with USB EHCI frontend I
>> > > > am seeing process crashes with an assert during regular operati
Hello,
during regular operations within a Linux guest with a USB EHCI frontend I
am seeing process crashes with an assert, e.g. during a
dpkg install:
hw/usb/dev-storage.c:334: usb_msd_handle_reset: Assertion `s->req ==
((void *)0)' failed.
This does happen when real block backend
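The assert fires when a reset arrives while a SCSI request is still outstanding. A toy model of that invariant (illustrative only, not QEMU's actual dev-storage.c code; names are made up):

```python
# Toy model of the usb_msd_handle_reset invariant: a reset must not
# arrive while a SCSI request is still in flight. Names are
# illustrative, not QEMU's actual structures.
class MSDState:
    def __init__(self):
        self.req = None  # in-flight SCSI request, None when idle

    def start_request(self, req):
        assert self.req is None, "previous request still pending"
        self.req = req

    def complete_request(self):
        self.req = None

    def handle_reset(self):
        # Mirrors the C-side check: assert(s->req == NULL)
        assert self.req is None, "reset with a request in flight"


s = MSDState()
s.start_request("read10")
s.complete_request()
s.handle_reset()        # fine: no request pending

s.start_request("read10")
try:
    s.handle_reset()    # a stuck backend leaves req set -> assert trips
    tripped = False
except AssertionError:
    tripped = True
print(tripped)
```

A backend that never completes the request leaves `req` set, so any guest-initiated reset then trips the assertion, which matches the reported crash pattern.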
On Wed, Nov 25, 2015 at 1:44 AM, Or Gerlitz wrote:
> Hi,
>
> When doing live migration with iperf running on the migrated VM
> over PV/virtio interface -- it doesn't end when the number of threads > 1 :(
> nor when we run an IO (fio) benchmark which gets high-throughput.
> BTW it seems that I made a slightly stronger claim than is actually
> the case - at least 3.18 works fine in all cases, so the issue was
> fixed somewhat earlier than in 4.2.
Ok, it turns out that the fix was brought by
commit 447f05bb488bff4282088259b04f47f0f9f76760
Author: Akinobu Mita
Hi,
during a test against a generic storage backend with an NBD frontend we
found that the virtio block device always splits a single ranged read
request into 4k ones, bringing the overall performance of
sequential reads far below virtio-scsi. Random reads are doing
relatively well on small
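To see why the splitting hurts, here is an illustrative model (not QEMU code) of coalescing adjacent 4k sub-requests back into ranged reads; with a fixed per-request cost at the backend, the split version pays that cost once per 4k block instead of once per range:

```python
# Illustrative model: a 1 MiB sequential read issued as 4 KiB pieces
# versus coalesced ranges. With a fixed per-request cost, the split
# version pays that cost 256 times instead of once.
def coalesce(offsets, block=4096):
    """Merge adjacent fixed-size requests into (offset, length) ranges."""
    ranges = []
    for off in sorted(offsets):
        if ranges and ranges[-1][0] + ranges[-1][1] == off:
            ranges[-1] = (ranges[-1][0], ranges[-1][1] + block)
        else:
            ranges.append((off, block))
    return ranges

reqs = [i * 4096 for i in range(256)]   # sequential 1 MiB read
print(len(reqs), coalesce(reqs))        # 256 requests -> one 1 MiB range
```

Random reads gain little from this, which matches the observation that only sequential reads fall far behind virtio-scsi.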
> How about the following comment:
> /* Linux guests expect 512/64/128MB alignment for the PAE/x32/x64 arches
> * respectively. Windows works fine with 2MB. To make sure that
> * memory hotplug works with the above flavors of Linux, set the
> * minimal alignment to 512MB (i.e. the PAE arch).
> * Enforcing
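The 512MB minimal alignment in the proposed comment amounts to rounding every hotplugged region's base up to a 512 MiB boundary; a minimal sketch of that arithmetic:

```python
# Round a hotplug base address up to the alignment the comment above
# proposes (512 MiB, the strictest of the listed Linux flavors).
MIB = 1024 * 1024

def align_up(addr, alignment=512 * MIB):
    """Round addr up to the next multiple of alignment (a power of two)."""
    return (addr + alignment - 1) & ~(alignment - 1)

# A dimm base falling just past 4 GiB gets pushed to the next boundary:
print(hex(align_up(4 * 1024 * MIB + 1)))
```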
On Mon, Oct 26, 2015 at 6:37 PM, Paolo Bonzini <pbonz...@redhat.com> wrote:
>
>
> On 26/10/2015 12:50, Andrey Korolyov wrote:
>> Hi,
>>
>> during the test against generic storage backend with NBD frontend we
>> found that the virtio block device is always s
On Mon, Oct 26, 2015 at 7:37 PM, Paolo Bonzini <pbonz...@redhat.com> wrote:
> On 26/10/2015 17:31, Andrey Korolyov wrote:
>>> the virtio block device is always splitting a single read
>>> range request to 4k ones, bringing the overall performance of the
>>> se
On Mon, Oct 26, 2015 at 8:03 PM, Paolo Bonzini <pbonz...@redhat.com> wrote:
>
>
> On 26/10/2015 17:43, Andrey Korolyov wrote:
>> On Mon, Oct 26, 2015 at 7:37 PM, Paolo Bonzini <pbonz...@redhat.com> wrote:
>>> On 26/10/2015 17:31, Andrey Korolyov wrote:
>
On Mon, Oct 26, 2015 at 8:32 PM, Paolo Bonzini <pbonz...@redhat.com> wrote:
>
>
> On 26/10/2015 18:18, Andrey Korolyov wrote:
>> Yes, both cases are positive, thanks for the very detailed explanation and
>> for the tips. Does this also mean that most current distros which are
On Mon, Sep 28, 2015 at 12:04 PM, Alberto Garcia wrote:
> On Mon 28 Sep 2015 02:18:33 AM CEST, Fam Zheng wrote:
>
>>> > Can this be abused? If I have a guest running in a cloud where the
>>> > cloud provider has put severe throttling limits on me, but lets me
On Tue, Sep 8, 2015 at 12:41 PM, Denis V. Lunev wrote:
> On 09/08/2015 12:33 PM, Paolo Bonzini wrote:
>>
>>
>> On 08/09/2015 10:00, Denis V. Lunev wrote:
>>>
>>> How the given solution works?
>>>
>>> If disk-deadlines option is enabled for a drive, one controls time
>>>
On Tue, Aug 18, 2015 at 5:51 PM, Andrey Korolyov <and...@xdel.ru> wrote:
> "Fixed" with cherry-pick of the
> 7a72f7a140bfd3a5dae73088947010bfdbcf6a40 and its predecessor
> 7103f60de8bed21a0ad5d15d2ad5b7a333dda201. Of course this is not a real
> fix as the only
On Fri, Aug 28, 2015 at 3:31 AM, Josh Durgin jdur...@redhat.com wrote:
On 08/27/2015 09:49 AM, Stefan Hajnoczi wrote:
On Mon, Aug 25, 2014 at 03:50:02PM -0600, Chris Friesen wrote:
The only limit I see in the whole call chain from
virtio_blk_handle_request() on down is the call to
On Thu, May 14, 2015 at 4:42 PM, Andrey Korolyov and...@xdel.ru wrote:
On Wed, Aug 27, 2014 at 9:43 AM, Chris Friesen
chris.frie...@windriver.com wrote:
On 08/25/2014 03:50 PM, Chris Friesen wrote:
I think I might have a glimmering of what's going on. Someone please
correct me if I get
On Thu, Aug 27, 2015 at 2:31 AM, Josh Durgin jdur...@redhat.com wrote:
On 08/26/2015 10:10 AM, Andrey Korolyov wrote:
On Thu, May 14, 2015 at 4:42 PM, Andrey Korolyov and...@xdel.ru wrote:
On Wed, Aug 27, 2014 at 9:43 AM, Chris Friesen
chris.frie...@windriver.com wrote:
On 08/25/2014 03:50
Fixed with cherry-pick of the
7a72f7a140bfd3a5dae73088947010bfdbcf6a40 and its predecessor
7103f60de8bed21a0ad5d15d2ad5b7a333dda201. Of course this is not a real
fix as the only race precondition is shifted/disappeared by a clear
assumption. Though there are not too many hotplug users around, I
On Mon, Aug 3, 2015 at 11:13 AM, Paolo Bonzini pbonz...@redhat.com wrote:
On 03/08/2015 09:47, Andrey Korolyov wrote:
I've mistyped the lun for tgtd upon volume hotplug, which resulted in an
accidental crash; there is nothing but human factor here. Until only LUN0
may possess such unusual properties
On Mon, Aug 3, 2015 at 9:45 AM, Peter Lieven p...@kamp.de wrote:
On 02.08.2015 at 13:42, Andrey Korolyov wrote:
Hello,
As we will never pass LUN#0 as a storage LUN, it would be better to
prohibit it at least in iscsi.c; otherwise it will result in an FPU
exception and an emulator crash
Hello,
As we will never pass LUN#0 as a storage LUN, it would be better to
prohibit it at least in iscsi.c; otherwise it will result in an FPU
exception and an emulator crash:
traps: qemu-system-x86[32430] trap divide error ip:7f1dab7b5073
sp:7f1d713e4ae0 error:0 in
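The trap is a divide error, consistent with some geometry value ending up zero when LUN 0 (the controller LUN) is opened as a disk. The kind of defensive check being proposed might look like this (a sketch; the function and field names are hypothetical, not iscsi.c's actual code):

```python
# Sketch of the proposed validation: reject LUN 0 up front instead of
# letting a zero capacity reach a division later. Names and the
# geometry formula are hypothetical, not QEMU's iscsi.c.
def open_iscsi_lun(lun, num_blocks):
    if lun == 0:
        raise ValueError("LUN 0 is the controller LUN, not a disk")
    if num_blocks == 0:
        raise ValueError("device reports zero capacity")
    # a later geometry computation of this shape is what would trap if
    # an unvalidated zero slipped through:
    cylinders = num_blocks // (16 * 63)
    return cylinders

print(open_iscsi_lun(1, 2097152))
```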
** Changed in: qemu
Status: New => Fix Released
--
You received this bug notification because you are a member of qemu-
devel-ml, which is subscribed to QEMU.
https://bugs.launchpad.net/bugs/1353149
Title:
qemu 2.1.0 fails to start if number of cores is greater than 1.
Status in QEMU:
This means that the issue was fixed elsewhere during the rc. I am not
promising to find the commit quickly, but I will elaborate as fast as
possible in my spare time. Apologies again for messing things up a
little.
Unfortunately it looks like the fix is quantitative rather than
qualitative - with
On Wed, Jul 15, 2015 at 7:46 PM, Andrey Korolyov and...@xdel.ru wrote:
On Wed, Jul 15, 2015 at 7:08 PM, Michael S. Tsirkin m...@redhat.com wrote:
On Wed, Jul 15, 2015 at 06:26:03PM +0300, Andrey Korolyov wrote:
On Wed, Jul 15, 2015 at 6:18 PM, Igor Mammedov imamm...@redhat.com wrote:
On Thu
On Wed, Jul 15, 2015 at 7:08 PM, Michael S. Tsirkin m...@redhat.com wrote:
On Wed, Jul 15, 2015 at 06:26:03PM +0300, Andrey Korolyov wrote:
On Wed, Jul 15, 2015 at 6:18 PM, Igor Mammedov imamm...@redhat.com wrote:
On Thu, 9 Jul 2015 20:04:35 +0300
Andrey Korolyov and...@xdel.ru wrote
On Wed, Jul 15, 2015 at 2:20 PM, Stefan Priebe - Profihost AG
s.pri...@profihost.ag wrote:
Hi,
is there a way to query the current cpu model / type of a running qemu
machine?
I mean host, kvm64, qemu64, ...
Stefan
I believe that the most proper one would be
'query-command-line-options'.
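For reference, a QMP command is just a JSON object with an "execute" key (plus optional "arguments") written to the monitor socket after the capabilities handshake; the query suggested above would be serialized like this (the socket path in the comment is purely an assumption):

```python
import json

# A QMP command is a JSON object with "execute" and optional
# "arguments"; the query suggested above serializes as:
cmd = {"execute": "query-command-line-options"}
wire = json.dumps(cmd)
print(wire)

# Sending it means writing `wire` to the QMP unix socket after the
# qmp_capabilities handshake, e.g. (path is a made-up example):
# with socket.socket(socket.AF_UNIX) as s:
#     s.connect("/var/run/qemu-vm33090.qmp")
#     ...
```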
On Wed, Jul 15, 2015 at 11:07 PM, Stefan Priebe s.pri...@profihost.ag wrote:
On 15.07.2015 at 13:32, Andrey Korolyov wrote:
On Wed, Jul 15, 2015 at 2:20 PM, Stefan Priebe - Profihost AG
s.pri...@profihost.ag wrote:
Hi,
is there a way to query the current cpu model / type of a running qemu
On Wed, Jul 15, 2015 at 6:18 PM, Igor Mammedov imamm...@redhat.com wrote:
On Thu, 9 Jul 2015 20:04:35 +0300
Andrey Korolyov and...@xdel.ru wrote:
On Wed, Jul 8, 2015 at 6:46 PM, Igor Mammedov imamm...@redhat.com wrote:
On Wed, 8 Jul 2015 13:01:05 +0300
Michael S. Tsirkin m...@redhat.com
On Wed, Jul 8, 2015 at 6:46 PM, Igor Mammedov imamm...@redhat.com wrote:
On Wed, 8 Jul 2015 13:01:05 +0300
Michael S. Tsirkin m...@redhat.com wrote:
[...]
- this fixes qemu on current kernels, so it's a bugfix
- this changes the semantics of memory hot unplug slightly
so I think it's
Radim fixed a bug that was causing me a post migration hang, I'm not
sure if it's the same case though, worth trying the patch here:
Thanks, the issue is fixed with this one. I obviously missed the patch,
as -stable 4.0.6 was tagged almost two weeks after the patch's
appearance and does not
On Tue, Jun 23, 2015 at 3:32 PM, Piotr Rybicki
piotr.rybi...@innervision.pl wrote:
Thanks Piotr, the lack of host-side memory shrinkage after balloon
deflation is interesting anyway; hopefully you can share the guest's
dmesg bits from the compiled-in balloon to check for visible signs of the issue
It
Hello,
during tests against 4.0.5/4.0.6 for the problem described in
https://lists.gnu.org/archive/html/qemu-devel/2015-06/msg03117.html I
noticed another weird issue: the VM hangs a couple of minutes
after being migrated if the hypervisor is running the mentioned kernel
version. It does not
On Mon, Jun 22, 2015 at 4:30 PM, Piotr Rybicki
piotr.rybi...@innervision.pl wrote:
On 2015-06-19 at 14:01, Andrey Korolyov wrote:
On Fri, Jun 19, 2015 at 2:14 PM, Piotr Rybicki
piotr.rybi...@innervision.pl wrote:
Hello.
Actually it was my mistake.
After some time using memory
On Fri, Jun 19, 2015 at 7:57 PM, Andrey Korolyov and...@xdel.ru wrote:
I don't think that it could be ACPI-related in any way; instead, it
looks like a race in vhost or a similar mm-touching mechanism. The
repeated hits you mentioned should indeed be fixed as well, but they
can hardly be
On Fri, Jun 19, 2015 at 2:14 PM, Piotr Rybicki
piotr.rybi...@innervision.pl wrote:
Hello.
Actually it was my mistake.
After some time using memory in the guest (find /, cp bigfile, etc.), the
res size of the qemu process shrinks to the expected value.
Sorry for disturbing.
Now I don't see any memory waste
I don't think that it could be ACPI-related in any way; instead, it
looks like a race in vhost or a similar mm-touching mechanism. The
repeated hits you mentioned should indeed be fixed as well, but they
can hardly be the reason for this problem.
Please find a trace from a single dimm plugging in
Do you see similar results on your side?
Best regards
Would you mind sharing your argument set for the emulator? As far as I
understood, you are using plain ballooning, with most of the results
above being expected for it. The case with 5+ gig memory
consumption for a deflated 1G guest
On Thu, Jun 18, 2015 at 12:21 AM, Vasiliy Tolstov v.tols...@selfip.ru wrote:
2015-06-17 19:26 GMT+03:00 Vasiliy Tolstov v.tols...@selfip.ru:
This is bad news =( I have debian wheezy which has an old kernel...
Is it possible to get proper results with the balloon? For example by
patching qemu or
On Thu, Jun 18, 2015 at 1:44 AM, Vasiliy Tolstov v.tols...@selfip.ru wrote:
2015-06-18 1:40 GMT+03:00 Andrey Korolyov and...@xdel.ru:
Yes, but I'm afraid that I don't fully understand why you need this
when the pure hotplug mechanism is available, aside maybe from the nice
memory stats from the balloon
On Wed, Jun 17, 2015 at 4:35 PM, Vasiliy Tolstov v.tols...@selfip.ru wrote:
Hi. I have an issue with incorrect memory size inside the VM. I'm trying
to utilize the memory balloon (not memory hotplug, because I have a
guest without memory hotplug (maybe)).
When the domain is started with static memory all works fine,
I've checked the logs, so far I don't see anything suspicious there
except for the 'acpi PNP0C80:00: Already enumerated' lines;
raising the log level might show more info
+ upload full logs
+ enable ACPI debug info so that the dimm device's _CRS would show up
+ QEMU's CLI that was used to produce
On Wed, Jun 17, 2015 at 6:33 PM, Vasiliy Tolstov v.tols...@selfip.ru wrote:
2015-06-17 17:09 GMT+03:00 Andrey Korolyov and...@xdel.ru:
The rest of the visible memory is eaten by reserved kernel areas; for us
this was the main reason to switch to hotplug a couple of years ago.
You would not be able
Answering back to myself - I made a wrong statement before; the
physical mappings *are* different in the different cases, of course!
Therefore the issue looks much simpler, and I'd have a patch within a
couple of days if nobody fixes this earlier.
... and another (possibly last) update. This is not
Please find the full CLI args and two guest logs for DIMM
initialization attached. As you can see, the freshly populated DIMMs
are probably misplaced in the SRAT ('already populated' messages), despite
the fact that the initialized ranges look correct at a glance.
When the VM is migrated to
On Thu, Jun 11, 2015 at 8:14 PM, Andrey Korolyov and...@xdel.ru wrote:
Hello Igor,
the current hotplug code for dimms effectively prohibits a
successful migration of a VM if memory was added after startup:
- start a VM with certain amount of empty memory slots,
- add some dimms and online
Hello Igor,
the current hotplug code for dimms effectively prohibits a
successful migration of a VM if memory was added after startup:
- start a VM with a certain number of empty memory slots,
- add some dimms and online them in the guest (I am transitioning from 2
to 16G with 512MB DIMMs),
- migrate
'{"execute": "drive-mirror", "arguments": { "device":
"drive-virtio-disk0", "target":
"rbd:dev-rack2/vm33090-dest:id=qemukvm:key=xxx:auth_supported=cephx\\;none:mon_host=10.6.0.1\\:6789\\;10.6.0.3\\:6789\\;10.6.0.4\\:6789",
"mode": "existing", "sync": "full", "detect-zeroes": true, "format":
"raw" } }'
Sorry, forgot to
On Wed, Jun 10, 2015 at 7:04 PM, Alexandre DERUMIER aderum...@odiso.com wrote:
Sorry, forgot to mention - of course I've pulled in all of the previous
zeroing-related queue, so I wasn't running only the QMP-related fix
on top of master :)
Hi, I had a discussion about rbd mirroring, detect-zeroes
On Mon, Jun 8, 2015 at 10:06 AM, Fam Zheng f...@redhat.com wrote:
The new optional flag defaults to true, in which case, mirror job would
check the read sectors and use sparse write if they are zero. Otherwise
data will be fully copied.
Signed-off-by: Fam Zheng f...@redhat.com
---
On Mon, Jun 8, 2015 at 6:50 PM, Jason Dillaman dilla...@redhat.com wrote:
Hmm ... looking at the latest version of QEMU, it appears that the RBD cache
settings are changed prior to reading the configuration file instead of
overriding the value after the configuration file has been read [1].
On Mon, Jun 8, 2015 at 12:52 PM, Alexey aluka...@alukardd.org wrote:
Hi all!
I suspect poor performance of the virtio-scsi driver.
I did a few tests:
Host machine: linux 3.19.1, QEMU emulator version 2.3.0
Guest machine: linux 4.0.4
part of domain xml:
On Mon, Jun 1, 2015 at 6:17 PM, Jason J. Herne
jjhe...@linux.vnet.ibm.com wrote:
Provide a method to throttle guest cpu execution. CPUState is augmented with
timeout controls and throttle start/stop functions. To throttle the guest cpu
the caller simply has to call the throttle start function
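The mechanism described in the patch amounts to a duty cycle: the throttle periodically forces the vcpu to sleep for a fraction of each period. A rough model of that timing math (illustrative only, not the patch's actual code; the period length is an assumption):

```python
# Rough model of guest cpu throttling as a duty cycle: for a given
# throttle percentage, each period is split into run time and forced
# sleep. The 100 ms period is an assumed value for illustration.
def throttle_times(pct, period_ms=100):
    """Return (run_ms, sleep_ms) for one throttling period."""
    if not 0 <= pct < 100:
        raise ValueError("throttle percentage must be in [0, 100)")
    sleep_ms = period_ms * pct / 100.0
    return period_ms - sleep_ms, sleep_ms

print(throttle_times(25))  # 25% throttle: run 75 ms, sleep 25 ms
```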
On Thu, May 28, 2015 at 3:05 PM, Fam Zheng f...@redhat.com wrote:
On Thu, 05/28 13:19, Paolo Bonzini wrote:
On 28/05/2015 13:11, Fam Zheng wrote:
Whoever uses ioeventfd needs to implement pause/resume, yes---not just
dataplane, also regular virtio-blk/virtio-scsi.
However, everyone
On Thu, May 21, 2015 at 11:35 AM, Wen Congyang we...@cn.fujitsu.com wrote:
On 05/21/2015 12:31 AM, Andrey Korolyov wrote:
On Thu, May 14, 2015 at 6:38 PM, Dr. David Alan Gilbert
dgilb...@redhat.com wrote:
* Wen Congyang (ghost...@gmail.com) wrote:
At 2015/5/14 19:19, Dr. David Alan Gilbert
On Thu, May 14, 2015 at 6:38 PM, Dr. David Alan Gilbert
dgilb...@redhat.com wrote:
* Wen Congyang (ghost...@gmail.com) wrote:
At 2015/5/14 19:19, Dr. David Alan Gilbert Wrote:
One thing I wanted to check I understand; how much RAM do the active and
hidden
disks use; let's say during the 1st
On Wed, Aug 27, 2014 at 9:43 AM, Chris Friesen
chris.frie...@windriver.com wrote:
On 08/25/2014 03:50 PM, Chris Friesen wrote:
I think I might have a glimmering of what's going on. Someone please
correct me if I get something wrong.
I think that VIRTIO_PCI_QUEUE_MAX doesn't really mean
A small update:
the behavior is caused by setting the unrestricted_guest feature to N; I
had this feature disabled everywhere from approx. three years ago, when
its enablement was one of the suspects in the host crashes with the
then-contemporary KVM module. Also nVMX is likely to not work at all
and produce
On Wed, Apr 1, 2015 at 2:49 PM, Radim Krčmář rkrc...@redhat.com wrote:
2015-03-31 21:23+0300, Andrey Korolyov:
On Tue, Mar 31, 2015 at 9:04 PM, Bandan Das b...@redhat.com wrote:
Bandan Das b...@redhat.com writes:
Andrey Korolyov and...@xdel.ru writes:
...
http://xdel.ru/downloads/kvm
On Wed, Apr 1, 2015 at 4:19 PM, Paolo Bonzini pbonz...@redhat.com wrote:
On 01/04/2015 14:26, Andrey Korolyov wrote:
Yes, I disabled host watchdog during runtime. Indeed guest-induced NMI
would look different and they had no reasons to be fired at this stage
inside guest. I`d suspect
On Wed, Apr 1, 2015 at 6:37 PM, Andrey Korolyov and...@xdel.ru wrote:
On Wed, Apr 1, 2015 at 4:19 PM, Paolo Bonzini pbonz...@redhat.com wrote:
On 01/04/2015 14:26, Andrey Korolyov wrote:
Yes, I disabled host watchdog during runtime. Indeed guest-induced NMI
would look different and they had
*putting my tinfoil hat on*
After thinking a little bit more, the observable behavior is quite a
good match for a BIOS-level hypervisor (a hardware trojan in modern
terminology), as it is likely sensitive to timing[1], does not appear
more than once per VM during a boot cycle and seemingly does not
On Tue, Mar 31, 2015 at 4:45 PM, Radim Krčmář rkrc...@redhat.com wrote:
2015-03-30 22:32+0300, Andrey Korolyov:
On Mon, Mar 30, 2015 at 9:56 PM, Radim Krčmář rkrc...@redhat.com wrote:
2015-03-27 13:16+0300, Andrey Korolyov:
On Fri, Mar 27, 2015 at 12:03 AM, Bandan Das b...@redhat.com wrote
On Tue, Mar 31, 2015 at 7:45 PM, Radim Krčmář rkrc...@redhat.com wrote:
2015-03-31 17:56+0300, Andrey Korolyov:
Chasing the culprit this way could take a long time, so a new tracepoint
that shows if 0xef is set on entry would let us guess the bug faster ...
Please provide a failing trace
On Tue, Mar 31, 2015 at 9:04 PM, Bandan Das b...@redhat.com wrote:
Bandan Das b...@redhat.com writes:
Andrey Korolyov and...@xdel.ru writes:
...
http://xdel.ru/downloads/kvm-e5v2-issue/another-tracepoint-fail-with-apicv.dat.gz
Something a bit more interesting, but the mess is happening just
On Mon, Mar 30, 2015 at 9:56 PM, Radim Krčmář rkrc...@redhat.com wrote:
2015-03-27 13:16+0300, Andrey Korolyov:
On Fri, Mar 27, 2015 at 12:03 AM, Bandan Das b...@redhat.com wrote:
Radim Krčmář rkrc...@redhat.com writes:
I second Bandan -- checking that it reproduces on other machine would
On Fri, Mar 27, 2015 at 12:03 AM, Bandan Das b...@redhat.com wrote:
Radim Krčmář rkrc...@redhat.com writes:
2015-03-26 21:24+0300, Andrey Korolyov:
On Thu, Mar 26, 2015 at 8:40 PM, Radim Krčmář rkrc...@redhat.com wrote:
2015-03-26 20:08+0300, Andrey Korolyov:
KVM internal error. Suberror
On Thu, Mar 26, 2015 at 11:40 PM, Radim Krčmář rkrc...@redhat.com wrote:
2015-03-26 21:24+0300, Andrey Korolyov:
On Thu, Mar 26, 2015 at 8:40 PM, Radim Krčmář rkrc...@redhat.com wrote:
2015-03-26 20:08+0300, Andrey Korolyov:
KVM internal error. Suberror: 2
extra data[0]: 80ef
extra
On Thu, Mar 26, 2015 at 5:47 AM, Bandan Das b...@redhat.com wrote:
Hi Andrey,
Andrey Korolyov and...@xdel.ru writes:
On Mon, Mar 16, 2015 at 10:17 PM, Andrey Korolyov and...@xdel.ru wrote:
For now, it looks like the bug has a mixed Murphy-Heisenberg nature, as
its appearance is very rare
On Thu, Mar 26, 2015 at 8:06 PM, Kevin O'Connor ke...@koconnor.net wrote:
On Thu, Mar 26, 2015 at 07:48:09PM +0300, Andrey Korolyov wrote:
On Thu, Mar 26, 2015 at 7:36 PM, Kevin O'Connor ke...@koconnor.net wrote:
I'm not sure if the crash always happens at the int $0x19 location
though
On Thu, Mar 26, 2015 at 7:36 PM, Kevin O'Connor ke...@koconnor.net wrote:
On Thu, Mar 26, 2015 at 04:58:07PM +0100, Radim Krčmář wrote:
2015-03-25 20:05-0400, Kevin O'Connor:
On Thu, Mar 26, 2015 at 02:35:58AM +0300, Andrey Korolyov wrote:
Thanks, strangely the reboot is always failing now
On Thu, Mar 26, 2015 at 8:18 PM, Kevin O'Connor ke...@koconnor.net wrote:
On Thu, Mar 26, 2015 at 08:08:52PM +0300, Andrey Korolyov wrote:
On Thu, Mar 26, 2015 at 8:06 PM, Kevin O'Connor ke...@koconnor.net wrote:
On Thu, Mar 26, 2015 at 07:48:09PM +0300, Andrey Korolyov wrote:
On Thu, Mar 26
On Thu, Mar 26, 2015 at 8:40 PM, Radim Krčmář rkrc...@redhat.com wrote:
2015-03-26 20:08+0300, Andrey Korolyov:
KVM internal error. Suberror: 2
extra data[0]: 80ef
extra data[1]: 8b0d
Btw. does this part ever change?
I see that first report had:
KVM internal error. Suberror: 2
On Thu, Mar 26, 2015 at 12:18 PM, Andrey Korolyov and...@xdel.ru wrote:
On Thu, Mar 26, 2015 at 5:47 AM, Bandan Das b...@redhat.com wrote:
Hi Andrey,
Andrey Korolyov and...@xdel.ru writes:
On Mon, Mar 16, 2015 at 10:17 PM, Andrey Korolyov and...@xdel.ru wrote:
For now, it looks like bug
On Mon, Mar 16, 2015 at 10:17 PM, Andrey Korolyov and...@xdel.ru wrote:
For now, it looks like the bug has a mixed Murphy-Heisenberg nature, as
its appearance is very rare (compared to the number of actual launches)
and is most probably bound to the physical characteristics of my
production nodes
- attach serial console (I am using virsh list for this exact purpose),
virsh console of course, sorry
On Wed, Mar 25, 2015 at 11:54 PM, Kevin O'Connor ke...@koconnor.net wrote:
On Wed, Mar 25, 2015 at 11:43:31PM +0300, Andrey Korolyov wrote:
On Mon, Mar 16, 2015 at 10:17 PM, Andrey Korolyov and...@xdel.ru wrote:
For now, it looks like the bug has a mixed Murphy-Heisenberg nature
On Thu, Mar 26, 2015 at 2:02 AM, Kevin O'Connor ke...@koconnor.net wrote:
On Thu, Mar 26, 2015 at 01:31:11AM +0300, Andrey Korolyov wrote:
On Wed, Mar 25, 2015 at 11:54 PM, Kevin O'Connor ke...@koconnor.net wrote:
Can you add something like:
-chardev file,path=seabioslog.`date +%s`,id
On Wed, Mar 18, 2015 at 8:36 PM, Mohammed Gamal
mohammed.ga...@profitbricks.com wrote:
Hi,
I've been sporadically getting my KVM virtual machines crashing with this
message while they're booting
KVM internal error. Suberror: 1
emulation failure
EAX= EBX= ECX=
For now, it looks like the bug has a mixed Murphy-Heisenberg nature, as
its appearance is very rare (compared to the number of actual launches)
and is most probably bound to the physical characteristics of my
production nodes. As soon as I reach any reproducible path for a
regular workstation
On Thu, Mar 12, 2015 at 12:59 PM, Dr. David Alan Gilbert
dgilb...@redhat.com wrote:
* Andrey Korolyov (and...@xdel.ru) wrote:
On Wed, Mar 11, 2015 at 10:59 PM, Dr. David Alan Gilbert
dgilb...@redhat.com wrote:
* Andrey Korolyov (and...@xdel.ru) wrote:
On Wed, Mar 11, 2015 at 10:33 PM, Dr
On Wed, Mar 11, 2015 at 10:33 PM, Dr. David Alan Gilbert
dgilb...@redhat.com wrote:
* Kevin O'Connor (ke...@koconnor.net) wrote:
On Wed, Mar 11, 2015 at 02:45:31PM -0400, Kevin O'Connor wrote:
On Wed, Mar 11, 2015 at 02:40:39PM -0400, Kevin O'Connor wrote:
For what it's worth, I can't seem
On Wed, Mar 11, 2015 at 10:59 PM, Dr. David Alan Gilbert
dgilb...@redhat.com wrote:
* Andrey Korolyov (and...@xdel.ru) wrote:
On Wed, Mar 11, 2015 at 10:33 PM, Dr. David Alan Gilbert
dgilb...@redhat.com wrote:
* Kevin O'Connor (ke...@koconnor.net) wrote:
On Wed, Mar 11, 2015 at 02:45:31PM
On Tue, Mar 10, 2015 at 7:57 PM, Dr. David Alan Gilbert
dgilb...@redhat.com wrote:
* Andrey Korolyov (and...@xdel.ru) wrote:
On Sat, Mar 7, 2015 at 3:00 AM, Andrey Korolyov and...@xdel.ru wrote:
On Fri, Mar 6, 2015 at 7:57 PM, Bandan Das b...@redhat.com wrote:
Andrey Korolyov and...@xdel.ru
On Sat, Mar 7, 2015 at 3:00 AM, Andrey Korolyov and...@xdel.ru wrote:
On Fri, Mar 6, 2015 at 7:57 PM, Bandan Das b...@redhat.com wrote:
Andrey Korolyov and...@xdel.ru writes:
On Fri, Mar 6, 2015 at 1:14 AM, Andrey Korolyov and...@xdel.ru wrote:
Hello,
recently I've got a couple of shiny new
On Tue, Mar 10, 2015 at 9:16 PM, Dr. David Alan Gilbert
dgilb...@redhat.com wrote:
* Andrey Korolyov (and...@xdel.ru) wrote:
On Tue, Mar 10, 2015 at 7:57 PM, Dr. David Alan Gilbert
dgilb...@redhat.com wrote:
* Andrey Korolyov (and...@xdel.ru) wrote:
On Sat, Mar 7, 2015 at 3:00 AM, Andrey
On Fri, Mar 6, 2015 at 7:57 PM, Bandan Das b...@redhat.com wrote:
Andrey Korolyov and...@xdel.ru writes:
On Fri, Mar 6, 2015 at 1:14 AM, Andrey Korolyov and...@xdel.ru wrote:
Hello,
recently I've got a couple of shiny new Intel 2620v2s for the future
replacement of the E5-2620v1, but I
Hello,
recently I've got a couple of shiny new Intel 2620v2s for the future
replacement of the E5-2620v1, but I have experienced relatively many
events with emulation errors; all traces look similar to the one below.
I am running qemu-2.1 on x86 on top of the 3.10 branch for testing
purposes but can switch
On Fri, Mar 6, 2015 at 1:14 AM, Andrey Korolyov and...@xdel.ru wrote:
Hello,
recently I've got a couple of shiny new Intel 2620v2s for the future
replacement of the E5-2620v1, but I have experienced relatively many
events with emulation errors; all traces look similar to the one below.
I am running
running
On Mon, Feb 16, 2015 at 5:47 PM, Stefan Priebe - Profihost AG
s.pri...@profihost.ag wrote:
Hi,
On 16.02.2015 at 15:44, Paolo Bonzini wrote:
On 16/02/2015 15:43, Stefan Priebe - Profihost AG wrote:
Hi,
On 16.02.2015 at 13:24, Paolo Bonzini wrote:
On 15/02/2015 19:46, Stefan Priebe
On Tue, Jan 20, 2015 at 10:06 PM, Alexandre DERUMIER
aderum...@odiso.com wrote:
Hi,
I have tried with numa enabled, and it's still don't work.
Can you send me your vm qemu command line ?
Also, with numa I have notice something strange with info numa command.
starting with -smp
BTW both 2008r2 and 2012r2 support this. 2008r2 is kind enough
to tell me that the cpu count has changed and that I should relaunch task
manager.
Hello,
there is an issue which is not a bug in itself (as anyone who plays with
Windows should be advised to use hypervclock timers), but it can
indicate some issue with interrupt handling.
Assume two launch strings (attached), and execution of
'{"execute": "cpu-add", "arguments": {"id": 1}}'
the regular