Hello,
i'm interested in whether it is possible to live-patch the guest kernel
without a reboot.
(host kernel and microcode are patched)
Greets,
Stefan
On 15.05.19 at 19:54, Daniel P. Berrangé wrote:
> On Wed, May 15, 2019 at 07:13:56PM +0200, Stefan Priebe - Profihost AG wrote:
>> Hello list,
>>
>> i've updated my host to kernel 4.19.43 and applied the following patch
>> to my qemu 2.12.1:
>> https://bugzilla.
Hello list,
i've updated my host to kernel 4.19.43 and applied the following patch
to my qemu 2.12.1:
https://bugzilla.suse.com/attachment.cgi?id=798722
But my guest running 4.19.43 still says:
Vulnerable: Clear CPU buffers attempted, no microcode; SMT Host state
unknown
while the host says:
On 17.09.2018 at 11:40, Jack Wang wrote:
> Stefan Priebe - Profihost AG wrote on Mon, 17.09.2018 at 9:00 AM:
>>
>> Hi,
>>
>> On 17.09.2018 at 08:38, Jack Wang wrote:
>>> Stefan Priebe - Profihost AG wrote on Sun, 16.09.2018 at 3:31 PM:
>>>>
>>>> Hello,
>
Maybe a missing piece:
vm.overcommit_memory=0
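The setting above can be inspected via procfs; a minimal sketch (assuming a Linux host, values as documented in the kernel's overcommit-accounting notes):

```shell
# Read the current overcommit policy:
# 0 = heuristic overcommit, 1 = always overcommit, 2 = strict accounting
cat /proc/sys/vm/overcommit_memory
# To make a change persistent, add "vm.overcommit_memory = 0"
# to /etc/sysctl.conf (or a file under /etc/sysctl.d/).
```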
Greets,
Stefan
On 17.09.2018 at 09:00, Stefan Priebe - Profihost AG wrote:
> Hi,
>
> On 17.09.2018 at 08:38, Jack Wang wrote:
>> Stefan Priebe - Profihost AG wrote on Sun, 16.09.2018 at 3:31 PM:
>>>
>>> Hello,
>>>
Hi,
On 17.09.2018 at 08:38, Jack Wang wrote:
> Stefan Priebe - Profihost AG wrote on Sun, 16.09.2018 at 3:31 PM:
>>
>> Hello,
>>
>> while overcommitting CPU I had several situations where all VMs went offline
>> while two VMs saturated all cores.
>>
>> I beli
Hello,
while overcommitting CPU I had several situations where all VMs went offline
while two VMs saturated all cores.
I believed all VMs would stay online but would just not be able to use all
their cores?
My original idea was to automate live migration on high host load to move vms
to
On 17.08.2018 at 11:41, Daniel P. Berrangé wrote:
> On Fri, Aug 17, 2018 at 08:44:38AM +0200, Stefan Priebe - Profihost AG wrote:
>> Hello,
>>
>> i haven't found anything on the web regarding qemu and mentioned variants.
>>
>> While my host says:
>> l
Hello,
i haven't found anything on the web regarding qemu and mentioned variants.
While my host says:
l1tf:Mitigation: PTE Inversion; VMX: SMT vulnerable, L1D conditional
cache flushes
meltdown:Mitigation: PTI
spec_store_bypass:Mitigation: Speculative Store Bypass disabled via
prctl and seccomp
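These per-vulnerability status lines come from sysfs; a quick way to dump all of them at once (same interface on host and guest, kernels 4.15+):

```shell
# Print the kernel's mitigation status for every known CPU vulnerability
grep . /sys/devices/system/cpu/vulnerabilities/*
```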
On 08.01.2018 at 23:07, Eric Blake wrote:
> On 01/08/2018 02:03 PM, Stefan Priebe - Profihost AG wrote:
>> Hello,
>>
>> for meltdown mitigation and performance it's important to have the pcid
>> flag passed down to the guest (f.e.
>> https://groups.google.com/fo
Hello,
for meltdown mitigation and performance it's important to have the pcid
flag passed down to the guest (f.e.
https://groups.google.com/forum/m/#!topic/mechanical-sympathy/L9mHTbeQLNU).
My host shows the flag:
# grep ' pcid ' /proc/cpuinfo | wc -l
56
But the guest does not:
# grep pcid
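Whether the guest sees pcid depends on the virtual CPU model qemu exposes; a hedged sketch of the usual options (flag syntax as in qemu's `-cpu` option; the named model is illustrative, pick one matching your hardware):

```shell
# Pass the host CPU through unchanged (exposes pcid if the host has it) ...
qemu-system-x86_64 -cpu host ...
# ... or add the flag to a named model explicitly
qemu-system-x86_64 -cpu Haswell,+pcid ...
```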
;pbonz...@redhat.com>:
>
>> On 04/01/2018 21:15, Stefan Priebe - Profihost AG wrote:
>> attached the relevant patch for everybody who needs it.
>
> This is the original patch from Intel, which doesn't work unless you
> have a patched kernel (which you almost certainly don't
>> 1.) intel / amd cpu microcode update
>> 2.) qemu update to pass the new MSR and CPU flags from the microcode update
>> 3.) host kernel update
>> 4.) guest kernel update
>>
>> The microcode update and the kernel update is publicly available but i'm
>>
Nobody? Is this something they did on their own?
Stefan
On 04.01.2018 at 07:27, Stefan Priebe - Profihost AG wrote:
> Hello,
>
> i've seen some vendors have updated qemu regarding meltdown / spectre.
>
> f.e.:
>
> CVE-2017-5715: QEMU was updated to allow pa
st to complete the mitigation on all layers.
>
> patching the guest kernel, to avoid that a process from the vm have access to
> memory of another process of same vm.
Yes.
Stefan
>
>
>
> - Original Mail -
> From: "Stefan Priebe, Profihost AG" <s.pri..
ate and the kernel update is publicly available but i'm
missing the qemu one.
Greets,
Stefan
> - Original Mail -
> From: "aderumier" <aderum...@odiso.com>
> To: "Stefan Priebe, Profihost AG" <s.pri...@profihost.ag>
> Cc: "qemu-devel" <
Hello,
i've seen some vendors have updated qemu regarding meltdown / spectre.
f.e.:
CVE-2017-5715: QEMU was updated to allow passing through new MSR and
CPUID flags from the host VM to the CPU, to allow enabling/disabling
branch prediction features in the Intel CPU. (bsc#1068032)
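With such a qemu, the new CPUID bit still has to be enabled per VM via the CPU model; a sketch (`spec-ctrl` is the feature name qemu introduced for the new speculation-control MSR/CPUID interface; the model name is illustrative):

```shell
# Expose the speculation-control interface to the guest explicitly ...
qemu-system-x86_64 -cpu Broadwell,+spec-ctrl ...
# ... or use -cpu host, which picks it up once host kernel and microcode are patched
```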
ugh vcpu to handle all the queues ?
Yes.
Stefan
> - Original Mail -
> From: "Stefan Priebe, Profihost AG" <s.pri...@profihost.ag>
> To: "qemu-devel" <qemu-devel@nongnu.org>
> Sent: Tuesday, 2 January 2018 12:17:29
> Subject: [Qemu-devel] dropped pk
On 03.01.2018 at 04:57, Wei Xu wrote:
> On Tue, Jan 02, 2018 at 10:17:25PM +0100, Stefan Priebe - Profihost AG wrote:
>>
>> On 02.01.2018 at 18:04, Wei Xu wrote:
>>> On Tue, Jan 02, 2018 at 04:24:33PM +0100, Stefan Priebe - Profihost AG
>>> wrote:
>>>
On 02.01.2018 at 18:04, Wei Xu wrote:
> On Tue, Jan 02, 2018 at 04:24:33PM +0100, Stefan Priebe - Profihost AG wrote:
>> Hi,
>> On 02.01.2018 at 15:20, Wei Xu wrote:
>>> On Tue, Jan 02, 2018 at 12:17:29PM +0100, Stefan Priebe - Profihost AG
>>> wrote:
>>
Hi,
On 02.01.2018 at 15:20, Wei Xu wrote:
> On Tue, Jan 02, 2018 at 12:17:29PM +0100, Stefan Priebe - Profihost AG wrote:
>> Hello,
>>
>> currently i'm trying to fix a problem where we have "random" missing
>> packets.
>>
>> We're doing an ssh co
Hello,
currently i'm trying to fix a problem where we have "random" missing
packets.
We're doing an ssh connect from machine a to machine b every 5 minutes
via rsync and ssh.
Sometimes it happens that we get this cron message:
"Connection to 192.168.0.2 closed by remote host.
rsync: connection
Hello,
On 22.11.2017 at 20:41, Dr. David Alan Gilbert wrote:
> * Paolo Bonzini (pbonz...@redhat.com) wrote:
>> On 06/11/2017 12:09, Stefan Priebe - Profihost AG wrote:
>>> HI Paolo,
>>>
>>> could this patchset be related?
>>
>> Uh oh, yes it should
Hello,
while using qemu 2.9.1 and doing a backup of a disk:
I have sometimes the following output:
Formatting '/mnt/qemu-249-2017_11_19-04_00_05.qcow2', fmt=qcow2
size=236223201280 encryption=off cluster_size=65536 lazy_refcounts=off
refcount_bits=16
followed by:
kvm: Failed to flush the L2
Hello,
On 10.11.2017 at 05:18, Jason Wang wrote:
>
>
> On 08.11.2017 19:22, Jason Wang wrote:
>>
>>
>> On 08.11.2017 18:46, Paolo Bonzini wrote:
>>> On 08/11/2017 09:21, Jason Wang wrote:
>>>>
>>>> On 08.11.2017 17:05, Stefan Prieb
On 08.11.2017 at 08:54, Jason Wang wrote:
>
>
> On 08.11.2017 15:41, Stefan Priebe - Profihost AG wrote:
>> Hi Paolo,
>>
>> On 06.11.2017 at 12:27, Paolo Bonzini wrote:
>>> On 06/11/2017 12:09, Stefan Priebe - Profihost AG wrote:
>>>> HI
Hi Paolo,
On 06.11.2017 at 12:27, Paolo Bonzini wrote:
> On 06/11/2017 12:09, Stefan Priebe - Profihost AG wrote:
>> HI Paolo,
>>
>> could this patchset be related?
>
> Uh oh, yes it should. Jason, any ways to fix it? I suppose we need to
> disable UFO in the n
HI Paolo,
could this patchset be related?
Greets,
Stefan
On 06.11.2017 at 10:52, Stefan Priebe - Profihost AG wrote:
> Hi Paolo,
>
> On 06.11.2017 at 10:49, Paolo Bonzini wrote:
>> On 06/11/2017 10:48, Stefan Priebe - Profihost AG wrote:
>>> Hi Paolo,
>>>
&
Hi Paolo,
On 06.11.2017 at 10:49, Paolo Bonzini wrote:
> On 06/11/2017 10:48, Stefan Priebe - Profihost AG wrote:
>> Hi Paolo,
>>
>> On 06.11.2017 at 10:40, Paolo Bonzini wrote:
>>> On 06/11/2017 10:38, Stefan Priebe - Profihost AG wrote:
>>>> Hello,
Hi Paolo,
On 06.11.2017 at 10:40, Paolo Bonzini wrote:
> On 06/11/2017 10:38, Stefan Priebe - Profihost AG wrote:
>> Hello,
>>
>> i've upgraded some servers from kernel 4.4 to 4.12 - both running Qemu
>> 2.9.1.
>>
>> If i migrate a VM from a host running
Hello,
i've upgraded some servers from kernel 4.4 to 4.12 - both running Qemu
2.9.1.
If i migrate a VM from a host running kernel 4.4 to a host running 4.12
i get:
kvm: virtio-net: saved image requires TUN_F_UFO support
kvm: Failed to load virtio-net-device:tmp
kvm: Failed to load
Hello Stefan,
On 30.08.2017 at 19:17, Stefan Hajnoczi wrote:
> On Fri, Aug 18, 2017 at 04:40:36PM +0200, Stefan Priebe - Profihost AG wrote:
>> i've a problem with two VMs running on the SAME host machine using qemu
>> 2.7.1 or 2.9.0 and vhost_net + virtio.
>>
>>
Hello,
does nobody have an idea?
Greets,
Stefan
On 18.08.2017 at 16:40, Stefan Priebe - Profihost AG wrote:
> Hello,
>
> i've a problem with two VMs running on the SAME host machine using qemu
> 2.7.1 or 2.9.0 and vhost_net + virtio.
>
> Sometimes TCP packets going from ma
Hello,
i've a problem with two VMs running on the SAME host machine using qemu
2.7.1 or 2.9.0 and vhost_net + virtio.
Sometimes TCP packets going from machine a to machine b are simply lost.
I see them in VM A using tcpdump going out but they never come in on
machine B. Both machines have
On 19.12.2016 at 12:03, Stefan Hajnoczi wrote:
> On Fri, Dec 16, 2016 at 10:00:36PM +0100, Stefan Priebe - Profihost AG wrote:
>>
>> On 15.12.2016 at 07:46, Alexandre DERUMIER wrote:
>>> does rollbacking the kernel to previous version fix the problem ?
>>
>>
ing another
profile like throughput-performance everything is fine again.
Greets,
Stefan
>
> i'm not sure if "perf" could give you some hints
> - Original Mail -
> From: "Stefan Priebe, Profihost AG" <s.pri...@profihost.ag>
> To: "aderumier"
maybe it could be related to vhost-net module too.
>
>
> - Original Mail -----
> From: "Stefan Priebe, Profihost AG" <s.pri...@profihost.ag>
> To: "qemu-devel" <qemu-devel@nongnu.org>
> Sent: Wednesday, 14 December 2016 16:04:08
> Subject: [
Hello,
after upgrading a cluster OS, Qemu, ... i'm experiencing slow and
volatile network speeds inside my VMs.
Currently I've no idea what causes this but it's related to the host
upgrades. Before i was running Qemu 2.6.2.
I'm using virtio for the network cards.
Greets,
Stefan
On 15.11.2016 at 12:07, Ladi Prosek wrote:
> Hi,
>
> On Tue, Nov 15, 2016 at 11:37 AM, Stefan Priebe - Profihost AG
> <s.pri...@profihost.ag> wrote:
>> Hello,
>>
>> On 15.11.2016 at 11:30, Dr. David Alan Gilbert wrote:
>>> * Stefan Priebe - Profihost
Hello,
On 15.11.2016 at 11:30, Dr. David Alan Gilbert wrote:
> * Stefan Priebe - Profihost AG (s.pri...@profihost.ag) wrote:
>> Hello,
>>
>> today i did a first live migration from Qemu 2.6.2 to Qemu 2.7.0. The VM
>> is running windows and virtio-balloon and with
Hello,
today i did a first live migration from Qemu 2.6.2 to Qemu 2.7.0. The VM
is running windows and virtio-balloon and with machine type pc-i440fx-2.5.
The output of the target qemu process was:
kvm_apic_post_load: Yeh
kvm_apic_post_load: Yeh
kvm_apic_post_load: Yeh
kvm_apic_post_load: Yeh
On 07.06.2016 at 09:37, Peter Lieven wrote:
> commit fefe2a78 accidently dropped the code path for injecting
> raw packets. This feature is needed for sending gratuitous ARPs
> after an incoming migration has completed. The result is increased
> network downtime for vservers where the network
On 07.06.2016 at 09:38, Peter Lieven wrote:
> On 06.06.2016 at 18:13, Stefan Priebe - Profihost AG wrote:
>> We're most probably seeing the same while migrating a machine running
>> balanceng but haven't thought this might be a qemu bug. Instead we're
>> investigating
We're most probably seeing the same while migrating a machine running balanceng
but haven't thought this might be a qemu bug. Instead we're investigating with
balanceng people.
Waiting for your further results.
Greets,
Stefan
Excuse my typo sent from my mobile phone.
> On 06.06.2016 at 17:51
On 25.02.2016 at 20:53, John Snow wrote:
On 02/25/2016 02:49 AM, Stefan Priebe - Profihost AG wrote:
On 22.02.2016 at 23:08, John Snow wrote:
On 02/22/2016 03:21 PM, Stefan Priebe wrote:
Hello,
is there any chance or hack to work with a bigger cluster size for the
drive backup job
On 22.02.2016 at 23:08, John Snow wrote:
>
>
> On 02/22/2016 03:21 PM, Stefan Priebe wrote:
>> Hello,
>>
>> is there any chance or hack to work with a bigger cluster size for the
>> drive backup job?
>>
>> See:
>> http:/
Hello,
is there any chance or hack to work with a bigger cluster size for the
drive backup job?
See:
http://git.qemu.org/?p=qemu.git;a=blob;f=block/backup.c;h=16105d40b193be9bb40346027bdf58e62b956a96;hb=98d2c6f2cd80afaa2dc10091f5e35a97c181e4f5
This is very slow with ceph - may be due to the
On 22.02.2016 at 18:36, Paolo Bonzini wrote:
On 20/02/2016 11:44, Stefan Priebe wrote:
Hi,
while testing kernel 4.4.2 and starting 20 Qemu 2.4.1 virtual machines,
I got those traces and a load of 500 on those systems. I was only able
to recover by sysrq-trigger.
It seems like something
Hi,
while testing kernel 4.4.2 and starting 20 Qemu 2.4.1 virtual machines,
I got those traces and a load of 500 on those systems. I was only able
to recover by sysrq-trigger.
All traces:
INFO: task pvedaemon worke:7470 blocked for more than 120 seconds.
Not tainted 4.4.2+1-ph #1
"echo 0 >
On 26.01.2016 at 11:13, Yang Zhang wrote:
> On 2016/1/26 15:22, Stefan Priebe - Profihost AG wrote:
>> Hi,
>>
>>> On 26.01.2016 at 02:46, Han, Huaitong wrote:
>>> What is the host kernel version and host dmesg information? And does
>>> the problem ex
On 26.01.2016 at 12:39, Yang Zhang wrote:
> On 2016/1/26 18:43, Stefan Priebe - Profihost AG wrote:
>>
>>> On 26.01.2016 at 11:13, Yang Zhang wrote:
>>> On 2016/1/26 15:22, Stefan Priebe - Profihost AG wrote:
>>>> Hi,
>>>>
>>>> On 26.01
at do you mean with replace old binary file? I haven't tested Kernel
4.4 as we use 4.1 as it is a long term stable kernel release.
Stefan
> On Mon, 2016-01-25 at 14:51 +0100, Stefan Priebe - Profihost AG wrote:
>> Hi,
>>
>> while running qemu 2.4 on Westmere CPUs I'm prett
Hi,
while running qemu 2.4 on Westmere CPUs I'm pretty often getting this
one while booting:
[0.811645] Switched APIC routing to physical x2apic.
[1.835678] [ cut here ]
[1.835704] WARNING: CPU: 0 PID: 1 at
arch/x86/kernel/apic/apic.c:1309
> - Original Message -
>> From: "Alexandre DERUMIER"
>> To: "ceph-devel"
>> Cc: "qemu-devel" , jdur...@redhat.com
>> Sent: Monday, November 9, 2015 5:48:45 AM
>> Subject: Re: [Qemu-devel] qemu : rbd block driver
] kernel_init+0xe/0xf0
[0.195715] [816347a2] ret_from_fork+0x42/0x70
[0.195719] [8161f6a0] ? rest_init+0x80/0x80
[0.195729] ---[ end trace cf665146248feec1 ]---
Stefan
On 15.08.2015 at 20:44, Stefan Priebe wrote:
Hi,
while switching to a FULL tickless kernel i
Hi,
while switching to a FULL tickless kernel i detected that all our VMs
produce the following stack trace while running under qemu 2.3.0.
[0.195160] HPET: 3 timers in total, 0 timers will be used for
per-cpu timer
[0.195181] hpet0: at MMIO 0xfed0, IRQs 2, 8, 0
[0.195188]
On 27.07.2015 at 14:01, John Snow wrote:
The following changes since commit f793d97e454a56d17e404004867985622ca1a63b:
Merge remote-tracking branch 'remotes/bonzini/tags/for-upstream' into
staging (2015-07-24 13:07:10 +0100)
are available in the git repository at:
On 27.07.2015 at 14:28, John Snow wrote:
On 07/27/2015 08:10 AM, Stefan Priebe - Profihost AG wrote:
On 27.07.2015 at 14:01, John Snow wrote:
The following changes since commit f793d97e454a56d17e404004867985622ca1a63b:
Merge remote-tracking branch 'remotes/bonzini/tags/for-upstream
Hi,
is there a way to query the current cpu model / type of a running qemu
machine?
I mean host, kvm64, qemu64, ...
Stefan
On 15.07.2015 at 13:32, Andrey Korolyov wrote:
On Wed, Jul 15, 2015 at 2:20 PM, Stefan Priebe - Profihost AG
s.pri...@profihost.ag wrote:
Hi,
is there a way to query the current cpu model / type of a running qemu
machine?
I mean host, kvm64, qemu64, ...
Stefan
I believe that the most
On 15.07.2015 at 22:15, Andrey Korolyov wrote:
On Wed, Jul 15, 2015 at 11:07 PM, Stefan Priebe s.pri...@profihost.ag wrote:
On 15.07.2015 at 13:32, Andrey Korolyov wrote:
On Wed, Jul 15, 2015 at 2:20 PM, Stefan Priebe - Profihost AG
s.pri...@profihost.ag wrote:
Hi,
is there a way
On 13.05.2015 at 21:05, Stefan Weil wrote:
On 13.05.2015 at 20:59, Stefan Priebe wrote:
On 13.05.2015 at 20:51, Stefan Weil wrote:
Hi,
I just noticed this patch because my provider told me that my KVM based
server
needs a reboot because of a CVE (see this German news:
http://www.heise.de
On 13.05.2015 at 20:51, Stefan Weil wrote:
Hi,
I just noticed this patch because my provider told me that my KVM based
server
needs a reboot because of a CVE (see this German news:
On 13.05.2015 at 21:04, John Snow wrote:
On 05/13/2015 02:59 PM, Stefan Priebe wrote:
On 13.05.2015 at 20:51, Stefan Weil wrote:
Hi,
I just noticed this patch because my provider told me that my KVM based
server
needs a reboot because of a CVE (see this German news:
http://www.heise.de
Hi,
it started to work again with virtio 100 instead of 94. No idea why it
works with qemu 2.2.0.
Stefan
On 24.03.2015 at 12:15, Stefan Priebe - Profihost AG wrote:
On 24.03.2015 at 11:45, Paolo Bonzini wrote:
On 24/03/2015 11:39, Stefan Priebe - Profihost AG wrote:
after upgrading
On 24.03.2015 at 11:45, Paolo Bonzini wrote:
On 24/03/2015 11:39, Stefan Priebe - Profihost AG wrote:
after upgrading Qemu from 2.2.0 to 2.2.1
Windows 2012 R2 works after installing. But after applying 72 updates it
breaks with a black screen of death.
Can you bisect it?
Have to try
Hi,
after upgrading Qemu from 2.2.0 to 2.2.1
Windows 2012 R2 works after installing. But after applying 72 updates it
breaks with a black screen of death.
Linking to this KB:
https://support.microsoft.com/en-us/kb/2939259
It works fine with Qemu 2.2.0
Greets,
Stefan
Hi,
Thanks.
I fixed it - there is already a patchseries in 4.0 to fix this. It will
be backported in 3.18.10 or 3.18.11.
Stefan
On 23.03.2015 at 12:54, Stefan Hajnoczi wrote:
On Sun, Mar 15, 2015 at 09:59:25AM +0100, Stefan Priebe wrote:
after upgrading the host kernel from 3.12 to 3.18
Hi,
after upgrading the host kernel from 3.12 to 3.18 live migration fails
with the following qemu output (guest running on a host with 3.12 =
host with 3.18):
kvm: Features 0x30afffe3 unsupported. Allowed features: 0x79bfbbe7
qemu: warning: error while loading state for instance 0x0 of
On 16.02.2015 at 16:50, Andreas Färber wrote:
On 16.02.2015 at 16:41, Stefan Priebe - Profihost AG wrote:
On 16.02.2015 at 15:49, Paolo Bonzini pbonz...@redhat.com wrote:
On 16/02/2015 15:47, Stefan Priebe - Profihost AG wrote:
Could it be that this is a results of compiling qemu
On 16.02.2015 at 16:49, Paolo Bonzini wrote:
On 16/02/2015 16:41, Stefan Priebe - Profihost AG wrote:
Yes, just do nothing (--enable-debug-info is the default;
--enable-debug enables debug info _and_ turns off optimization).
If I do not enable-debug dh_strip does not extract any debugging
Hi,
On 16.02.2015 at 13:24, Paolo Bonzini wrote:
On 15/02/2015 19:46, Stefan Priebe wrote:
while i get a constant random 4k i/o write speed of 20.000 iops with
qemu 2.1.0 or 2.1.3. I get jumping speeds with qemu 2.2 (jumping between
500 iops and 15.000 iop/s).
If i use virtio instead
Hi,
On 16.02.2015 at 15:44, Paolo Bonzini wrote:
On 16/02/2015 15:43, Stefan Priebe - Profihost AG wrote:
Hi,
On 16.02.2015 at 13:24, Paolo Bonzini wrote:
On 15/02/2015 19:46, Stefan Priebe wrote:
while i get a constant random 4k i/o write speed of 20.000 iops with
qemu 2.1.0
Hi,
On 16.02.2015 at 15:58, Andrey Korolyov and...@xdel.ru wrote:
On Mon, Feb 16, 2015 at 5:47 PM, Stefan Priebe - Profihost AG
s.pri...@profihost.ag wrote:
Hi,
On 16.02.2015 at 15:44, Paolo Bonzini wrote:
On 16/02/2015 15:43, Stefan Priebe - Profihost AG wrote:
Hi,
On
On 16.02.2015 at 15:49, Paolo Bonzini pbonz...@redhat.com wrote:
On 16/02/2015 15:47, Stefan Priebe - Profihost AG wrote:
Could it be that this is a results of compiling qemu with --enable-debug
to get debugging symbols?
Yes.
*urg* my fault - sorry. Is there a way to get
Hi,
while i get a constant random 4k i/o write speed of 20.000 iops with
qemu 2.1.0 or 2.1.3. I get jumping speeds with qemu 2.2 (jumping between
500 iops and 15.000 iop/s).
If i use virtio instead of virtio-scsi speed is the same between 2.2 and
2.1.
Is there a known regression?
Greets,
Hi,
while migrating a bunch of VMs i saw multiple times segaults with qemu
2.1.2.
Is this a known bug?
Full backtrace:
Program terminated with signal 11, Segmentation fault.
#0 0x7ff9c73bca90 in ?? () from /lib/x86_64-linux-gnu/libc.so.6
(gdb) bt
#0 0x7ff9c73bca90 in ?? () from
Tested-by: Stefan Priebe s.pri...@profihost.ag
On 03.06.2014 11:21, Stefan Hajnoczi wrote:
qemu_bh_schedule() is supposed to be thread-safe at least the first time
it is called. Unfortunately this is not quite true:
bh-scheduled = 1;
aio_notify(bh-ctx);
Since another thread may
On 02.06.2014 at 15:40, Stefan Hajnoczi stefa...@gmail.com wrote:
On Fri, May 30, 2014 at 04:10:39PM +0200, Stefan Priebe wrote:
even with
+From 271c0f68b4eae72691721243a1c37f46a3232d61 Mon Sep 17 00:00:00 2001
+From: Fam Zheng f...@redhat.com
+Date: Wed, 21 May 2014 10:42:13 +0800
On 02.06.2014 15:40, Stefan Hajnoczi wrote:
On Fri, May 30, 2014 at 04:10:39PM +0200, Stefan Priebe wrote:
even with
+From 271c0f68b4eae72691721243a1c37f46a3232d61 Mon Sep 17 00:00:00 2001
+From: Fam Zheng f...@redhat.com
+Date: Wed, 21 May 2014 10:42:13 +0800
+Subject: [PATCH] aio: Fix use
On 02.06.2014 22:45, Paolo Bonzini wrote:
On 02/06/2014 21:32, Stefan Priebe wrote:
#0 0x7f69e421c43f in event_notifier_set (e=0x124) at
util/event_notifier-posix.c:97
#1 0x7f69e3e37afc in aio_notify (ctx=0x0) at async.c:246
#2 0x7f69e3e37697 in qemu_bh_schedule (bh
0x7f9dcdba513d in clone () from /lib/x86_64-linux-gnu/libc.so.6
#12 0x in ?? ()
On 28.05.2014 21:44, Stefan Priebe wrote:
is this:
commit 271c0f68b4eae72691721243a1c37f46a3232d61
Author: Fam Zheng f...@redhat.com
Date: Wed May 21 10:42:13 2014 +0800
aio: Fix use-after-free
in clone () from /lib/x86_64-linux-gnu/libc.so.6
#12 0x in ?? ()
On 28.05.2014 21:44, Stefan Priebe wrote:
is this:
commit 271c0f68b4eae72691721243a1c37f46a3232d61
Author: Fam Zheng f...@redhat.com
Date: Wed May 21 10:42:13 2014 +0800
aio: Fix use-after-free
Hello,
i mean since using qemu 2.0 i've now seen several times the following
segfault:
(gdb) bt
#0 0x7f2af1196433 in event_notifier_set (e=0x124) at
util/event_notifier-posix.c:97
#1 0x7f2af0db1afc in aio_notify (ctx=0x0) at async.c:246
#2 0x7f2af0db1697 in qemu_bh_schedule
is this:
commit 271c0f68b4eae72691721243a1c37f46a3232d61
Author: Fam Zheng f...@redhat.com
Date: Wed May 21 10:42:13 2014 +0800
aio: Fix use-after-free in cancellation path
Stefan
On 28.05.2014 21:40, Stefan Priebe wrote:
Hello,
i mean since using qemu 2.0 i've now seen several times
Hi,
i now was able to catch the error.
It is:
Length mismatch: :00:12.0/virtio-net-pci.rom: 4 in != 1
qemu: warning: error while loading state for instance 0x0 of device 'ram'
load of migration failed
Stefan
On 09.05.2014 19:05, Paolo Bonzini wrote:
On 09/05/2014 15:13, Stefan
On 14.05.2014 10:11, Paolo Bonzini wrote:
On 14/05/2014 09:17, Stefan Priebe - Profihost AG wrote:
i now was able to catch the error.
It is:
Length mismatch: :00:12.0/virtio-net-pci.rom: 4 in != 1
qemu: warning: error while loading state for instance 0x0 of device 'ram
On 14.05.2014 10:36, Paolo Bonzini wrote:
On 14/05/2014 10:29, Stefan Priebe - Profihost AG wrote:
Hi,
i compile qemu on my own.
I have the rom files under /usr/share/kvm and they look like this:
ls -la /usr/share/kvm/*.rom
-rw-r--r-- 1 root root 173568 May 14 09:39 /usr/share/kvm
On 14.05.2014 11:00, Paolo Bonzini wrote:
On 14/05/2014 10:38, Stefan Priebe - Profihost AG wrote:
Currently it has the same as i already updated the package there too.
So you mean i had done a mistake compiling the old package - so it had
wrong sizes?
Yes, probably.
Can you do
Hello list,
i was trying to migrate older Qemu (1.5 and 1.7.2) to a machine running
Qemu 2.0.
I started the target machine with:
-machine type=pc-i440fx-1.5 / -machine type=pc-i440fx-1.7
But the migration simply fails. Migrating Qemu 2.0 to Qemu 2.0 succeeds.
I see no output at the monitor of
On 09.05.2014 at 15:41, Dr. David Alan Gilbert dgilb...@redhat.com wrote:
* Stefan Priebe - Profihost AG (s.pri...@profihost.ag) wrote:
Hello list,
i was trying to migrate older Qemu (1.5 and 1.7.2) to a machine running
Qemu 2.0.
I started the target machine with:
-machine type=pc
On 09.05.2014 18:29, Dr. David Alan Gilbert wrote:
* Stefan Priebe - Profihost AG (s.pri...@profihost.ag) wrote:
On 09.05.2014 at 15:41, Dr. David Alan Gilbert dgilb...@redhat.com wrote:
* Stefan Priebe - Profihost AG (s.pri...@profihost.ag) wrote:
Hello list,
i was trying to migrate
)
On 09.05.2014 19:05, Paolo Bonzini wrote:
On 09/05/2014 15:13, Stefan Priebe - Profihost AG wrote:
I see no output at the monitor of Qemu 2.0.
# migrate -d tcp:a.b.c.d:6000
# info migrate
capabilities: xbzrle: off x-rdma-pin-all: off auto-converge: on
zero-blocks: off
Migration
Hello Igor,
while testing your patchset i came to a very stupid problem.
I wanted to test migration and it cames out that the migration works
fine after plugging in memory only if i run the target VM without the
-daemonize option.
If i enable the -daemonize option the target vm tries to read
Hello,
i've 4 USB serial devices and one HID device i want to pass to a guest.
The passing itself works fine but while the guest has 0 load or cpu
usage the qemu process itself has around 40% cpu usage on a single 3,2
ghz E3 xeon.
I already tried xhci but it doesn't change anything. Also the
On 24.02.2014 17:13, Eric Blake wrote:
On 02/24/2014 08:00 AM, Stefan Hajnoczi wrote:
What is the right way to check for enough free memory and memory
usage of a specific vm?
I would approach it in terms of guest RAM allocation plus QEMU overhead:
host_ram = num_guests *
On 14.02.2014 15:59, Stefan Hajnoczi wrote:
On Tue, Feb 11, 2014 at 07:32:46PM +0100, Stefan Priebe wrote:
On 11.02.2014 17:22, Peter Lieven wrote:
On 11.02.2014 at 16:44, Stefan Hajnoczi stefa...@gmail.com wrote:
On Tue, Feb 11, 2014 at 3:54 PM, Stefan Priebe - Profihost AG
s.pri
On 14.02.2014 16:03, Stefan Hajnoczi wrote:
On Tue, Feb 11, 2014 at 07:30:54PM +0100, Stefan Priebe wrote:
On 11.02.2014 16:44, Stefan Hajnoczi wrote:
On Tue, Feb 11, 2014 at 3:54 PM, Stefan Priebe - Profihost AG
s.pri...@profihost.ag wrote:
in the past (Qemu 1.5) a migration failed
On 13.02.2014 21:06, Dr. David Alan Gilbert wrote:
* Stefan Priebe (s.pri...@profihost.ag) wrote:
On 10.02.2014 17:07, Dr. David Alan Gilbert wrote:
* Stefan Priebe (s.pri...@profihost.ag) wrote:
i could fix it by explicitly disable xbzrle - it seems its
automatically on if i do not set