Hi Stefan,
Indeed, this is really strange.
Could it be a systemd bug? (As you use Proxmox, and Proxmox uses a systemd scope
to launch VMs?)
- Original Message -
From: "Stefan Priebe, Profihost AG"
To: "qemu-devel"
Sent: Sunday, September 16, 2018 15:30:34
Subject: [Qemu-devel] Overcommiting
Hi,
proxmox users have reported this bug
https://forum.proxmox.com/threads/high-cpu-load-for-windows-10-guests-when-idle.44531/#post-213876
The hv_synic and hv_stimer Hyper-V enlightenments fix it
(it seems to be related to some HPET change in Windows).
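For reference, a QEMU command line enabling these enlightenments might look like
this (a sketch, not the exact flags from the report; note that hv_synic also
requires hv_vpindex, and hv_stimer requires hv_synic and hv_time):

    -cpu host,hv_relaxed,hv_vapic,hv_spinlocks=0x1fff,hv_time,hv_vpindex,hv_synic,hv_stimer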
- Original Message -
From: "Lemos Lemosov"
>>Heh. I have stopped pushing my patches (and scratched a few itches with
>>patchew instead) because I'm still a bit burned out from recent KVM
>>stuff, but this may be the injection of enthusiasm that I needed. :)
Thanks Paolo for your great work on multiqueue, that's a lot of work since the
Sorry, I just found that the problem is in our Proxmox implementation:
we use a socat tunnel for the NBD mirroring, with a 30s timeout in case
of inactivity.
So, not a QEMU bug.
Regards,
Alexandre
- Original Message -
From: "aderumier"
To: "qemu-devel"
Hi,
I currently have mirroring jobs to NBD failing when multiple jobs are running
in parallel.
Steps to reproduce, with 2 disks:
1) launch a mirroring job of the first disk to the remote NBD target (to the
running target QEMU)
2) wait until it reaches ready = 1; do not complete
3) launch a mirroring job of
Thanks Paolo !
Do we need to update the guest kernel too, if QEMU uses cpu model qemu64?
(For example, I have some very old guests where kernel update is not possible)
Regards,
Alexandre
- Original Message -
From: "pbonzini"
To: "qemu-devel"
Cc:
- Original Message -
From: "Stefan Priebe, Profihost AG" <s.pri...@profihost.ag>
To: "aderumier" <aderum...@odiso.com>
Cc: "qemu-devel" <qemu-devel@nongnu.org>
Sent: Thursday, January 4, 2018 09:17:41
Subject: Re: [Qemu-devel] CVE-2017-5715: relevant qem
Does somebody have a Red Hat account to see the content of:
https://access.redhat.com/solutions/3307851
"Impacts of CVE-2017-5754, CVE-2017-5753, and CVE-2017-5715 to Red Hat
Virtualization products"
- Original Message -
From: "aderumier"
To: "Stefan Priebe, Profihost AG"
>>Can anybody point me to the relevant qemu patches?
I haven't found them yet.
Do you know if a VM using the kvm64 CPU model is protected or not?
- Original Message -
From: "Stefan Priebe, Profihost AG"
To: "qemu-devel"
Sent: Thursday, January 4
Hi Stefan,
>>The tap devices on the target vm shows dropped RX packages on BOTH tap
>>interfaces - strangely with the same amount of pkts?
That's strange indeed.
If you tcpdump the tap interfaces, do you see incoming traffic only on one
interface, or on both randomly?
(Can you provide the network
Hi,
Has somebody reviewed this patch?
I'm also able to reproduce the VM crash like the Proxmox user.
This patch fixes it for me too.
Regards,
Alexandre
- Original Message -
From: "Wolfgang Bumiller"
To: "qemu-devel"
Cc: "pbonzini"
"no-flush": false, "direct": false, "writeback": true}, "file":
"target.qcow2", "encryption_key_missing": false}, {"iops_rd": 0,
"detect_zeroes": "off", "image": {"virtual-size": 1074135040, "fi
"bps": 0, "bps_rd": 0, "cache":
{"no-flush": false, "direct": false, "writeback": true}, "file":
"target.qcow2", "encryption_key_missing": false}, {"iops_rd": 0,
"detect_zeroes": "off",
- Original Message -
From: "Kashyap Chamarthy" <kcham...@redhat.com>
To: "aderumier" <aderum...@odiso.com>
Cc: "qemu-devel" <qemu-devel@nongnu.org>
Sent: Wednesday, April 19, 2017 12:43:00
Subject: Re: [Qemu-devel] blockdev-mirror , how to replace old nodenam
).
- Original Message -
From: "Fam Zheng" <f...@redhat.com>
To: "aderumier" <aderum...@odiso.com>
Cc: "qemu-devel" <qemu-devel@nongnu.org>
Sent: Wednesday, April 19, 2017 11:02:36
Subject: Re: [Qemu-devel] blockdev-mirror , how to replace old nodename by
Hi,
I'm trying to implement blockdev-mirror, to replace drive-mirror, as we can pass
more options with blockdev-mirror.
I would like to mirror an attached blockdev to a new blockdev, then switch at
the end with block-job-complete, as with drive-mirror.
qemu command line (vm-138-disk-1.qcow2
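For context, the blockdev-mirror flow described here boils down to a QMP
exchange along these lines (node names and file paths are hypothetical; a
sketch only):

    { "execute": "blockdev-add", "arguments": { "node-name": "target0",
        "driver": "qcow2", "file": { "driver": "file",
        "filename": "/target.qcow2" } } }
    { "execute": "blockdev-mirror", "arguments": { "job-id": "mirror0",
        "device": "drive-scsi0", "target": "target0", "sync": "full" } }
    { "execute": "block-job-complete", "arguments": { "device": "mirror0" } }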
Pretty awesome news! Congrats!
So, can we update the wiki changelog?
http://wiki.qemu-project.org/ChangeLog/2.9
"QMP command blockdev-add is still a work in progress. It doesn't support all
block drivers, it lacks a matching blockdev-del, and more. It might change
incompatibly."
-
>>Not yet. I just tested on one qemu-kvm VM. It works fine.
>>The performance may need more time.
>>Anyone can test this patch if you do fast
Hi, I would like to bench it with small 4k reads/writes.
On the Ceph side, do we need this PR?:
https://github.com/ceph/ceph/pull/13447
-
Hi,
Proxmox users have recently reported corruption with QEMU 2.7 and scsi-block
(passing a physical /dev/sdX to virtio-scsi).
It works fine with QEMU 2.6.
QEMU 2.7 + scsi-hd works fine.
https://forum.proxmox.com/threads/proxmox-4-4-virtio_scsi-regression.31471/page-2
- Original Message
ation required before option 3" with
calling drive_mirror to nbd.
Any idea?
Regards,
Alexandre Derumier
el" <qemu-devel@nongnu.org>
Sent: Wednesday, December 14, 2016 21:36:23
Subject: Re: [Qemu-devel] any known virtio-net regressions in Qemu 2.7?
On 14.12.2016 at 16:33, Alexandre DERUMIER wrote:
> Hi Stefan,
>
> have you upgraded the kernel?
Yes, sure. But I'm out of ideas.
Hi Stefan,
Have you upgraded the kernel?
Maybe it could be related to the vhost-net module too.
- Original Message -
From: "Stefan Priebe, Profihost AG"
To: "qemu-devel"
Sent: Wednesday, December 14, 2016 16:04:08
Subject: [Qemu-devel] any known
Hello,
I'm looking to implement cpu hotplug,
and I have a question about cpu flags
currently I have something like
-cpu qemu64,+lahf_lm,+sep,+kvm_pv_unhalt,+kvm_pv_eoi,enforce
-smp 4,sockets=2,cores=2,maxcpus=4
Do I need to define flags like:
-smp 2,sockets=2,cores=2,maxcpus=4
-device
Ceph forces writethrough until a flush is detected.
With cache=unsafe, we never send a flush.
So we need to tell Ceph
to set rbd_cache_writethrough_until_flush=false in this case.
This speeds up qemu-img convert a lot, which uses cache=unsafe by default.
Signed-off-by: Alexandre Derumier <ade
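As an illustration of the option itself (not of this patch, which sets it
internally), Ceph conf keys can also be passed per-image in the rbd: URI;
pool and image names here are hypothetical:

    qemu-img convert -O raw source.qcow2 \
        rbd:rbd/vm-100-disk-1:rbd_cache_writethrough_until_flush=false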
ture to not open the same backend several times?
I'm thinking about Ceph/librbd, which since the last version allows a backend
to be opened only once by default
(exclusive-lock, which is a requirement for advanced features like
rbd-mirroring, fast-diff, ...).
Regards,
Alexandre Derumier
- Original Message
>>... but some actual migration testing would be great indeed.
I have sent a build of the patched version to Proxmox users for testing;
I'm waiting for their results.
- Original Message -
From: "kraxel"
To: "Laszlo Ersek"
Cc: "qemu-devel"
Hi,
Proxmox users have reported the same bug (QEMU 2.5 with pc-i440fx-2.4 not
migrating to QEMU 2.4.1):
https://forum.proxmox.com/threads/cant-live-migrate-after-dist-upgrade.26097/
I haven't verified yet, but it seems to be related.
--
You received this bug notification because you are a
Hi,
I (and Proxmox users) have the same problem as this bug report:
https://bugs.launchpad.net/qemu/+bug/1536487
https://forum.proxmox.com/threads/cant-live-migrate-after-dist-upgrade.26097/
Migrating from QEMU 2.5 with pc-i440fx-2.4 to QEMU 2.4.1 doesn't work.
qemu-system-x86_64: error while loading
- Original Message -
From: "Vasiliy Tolstov" <v.tols...@selfip.ru>
To: "aderumier" <aderum...@odiso.com>
Cc: "qemu-devel" <qemu-devel@nongnu.org>
Sent: Wednesday, November 25, 2015 11:48:11
Subject: Re: [Qemu-devel] poor virtio-scsi performance (fio testing)
Maybe you could try to create 2 disks in your VM, each with 1 dedicated
iothread, then run fio on both disks at the same time, and see if performance
improves.
But maybe there is some write overhead with lvmthin (because of copy-on-write)
and sheepdog.
Have you tried with classic LVM
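A parallel fio run of the kind suggested here might look like this (device
paths and parameters are hypothetical; adjust to the actual disks):

    fio --name=disk1 --filename=/dev/vdb --rw=randwrite --bs=4k \
        --iodepth=32 --ioengine=libaio --direct=1 --runtime=60 &
    fio --name=disk2 --filename=/dev/vdc --rw=randwrite --bs=4k \
        --iodepth=32 --ioengine=libaio --direct=1 --runtime=60 &
    wait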
ot; <aderum...@odiso.com>
Cc: "qemu-devel" <qemu-devel@nongnu.org>
Sent: Wednesday, November 25, 2015 11:12:33
Subject: Re: [Qemu-devel] poor virtio-scsi performance (fio testing)
2015-11-25 13:08 GMT+03:00 Alexandre DERUMIER <aderum...@odiso.com>:
> Maybe coul
e Ceph tracker ticket I opened
[1], that would be very helpful.
[1] http://tracker.ceph.com/issues/13726
Thanks,
Jason
- Original Message -
> From: "Alexandre DERUMIER" <aderum...@odiso.com>
> To: "ceph-devel" <ceph-de...@vger.kernel.org>
>
h-devel"
<ceph-de...@vger.kernel.org>, "qemu-devel" <qemu-devel@nongnu.org>
Sent: Monday, November 9, 2015 08:22:34
Subject: Re: [Qemu-devel] qemu : rbd block driver internal snapshot and vm_stop
is hanging forever
On 11/09/2015 10:19 AM, Denis V. Lunev wr
Also,
this occurs only with rbd_cache=false or QEMU drive cache=none.
If I use rbd_cache=true or QEMU drive cache=writeback, I don't have this bug.
- Original Message -
From: "aderumier"
To: "ceph-devel" , "qemu-devel"
Hi,
with QEMU (2.4.1), if I do an internal snapshot of an RBD device
and then pause the VM with vm_stop,
the QEMU process hangs forever.
monitor commands to reproduce:
# snapshot_blkdev_internal drive-virtio0 yoursnapname
# stop
I don't see this with the qcow2 or sheepdog block drivers for
Some other info:
I can also reproduce it with a manual snapshot via the rbd command:
# rbd --image myrbdvolume snap create --snap snap1
qemu monitor:
#stop
This is with ceph hammer 0.94.5.
In QEMU's vm_stop, the only thing related to the block driver is
bdrv_drain_all();
ret =
Something is really wrong,
because the guest also freezes with a simple snapshot, with cache=none /
rbd_cache=false.
qemu monitor: snapshot_blkdev_internal drive-virtio0 snap1
or
rbd command: rbd --image myrbdvolume snap create --snap snap1
Then the guest can't read/write to disk.
>>Hi,
Hi
>>
>>I've seen the same issue with debian jessie.
>>
>>Compiled 4.2.3 from kernel.org with "make localyesconfig",
>>no problem any more
Host kernel or guest kernel?
- Original Message -
From: "Markus Breitegger" <1494...@bugs.launchpad.net>
To: "qemu-devel"
;
}
I'm not sure if it's a QEMU bug or a kernel/KVM bug.
Help is welcome.
Regards,
Alexandre Derumier
Hi,
The OVS documentation says that 1GB hugepages are needed:
https://github.com/openvswitch/ovs/blob/master/INSTALL.DPDK.md
Is this true? (The wiki says 2M hugepages.)
- Original Message -
From: "Star Chang"
To: "Marcel Apfelbaum"
Cc: "qemu-devel"
Hi,
I confirm this bug;
I have seen this many times with Debian jessie (kernel 3.16) and
Ubuntu (kernel 4.x) with QEMU 2.2 and QEMU 2.3.
--
You received this bug notification because you are a member of qemu-
devel-ml, which is subscribed to QEMU.
https://bugs.launchpad.net/bugs/1494350
Hi,
Proxmox users report the same bug here with QEMU 2.4:
http://forum.proxmox.com/threads/23346-Proxmox-4b1-q35-machines-failing-to-reboot-problems-with-PCI-passthrough
We are going to test with the commit reverted to see if it helps.
- Original Message -
From: "Peter Maloney"
Hi,
I have hit this bug today on 3 Debian jessie guests (kernel 3.16), after
migration from QEMU 2.3 to QEMU 2.4.
Is it a QEMU bug or a guest kernel 3.16 bug?
Regards,
Alexandre Derumier
- Original Message -
From: Michael Tokarev m...@tls.msk.ru
To: qemu-devel qemu-devel@nongnu.org, debian-ad
hanging vm (2.4rc1 works fine)
On 29/07/2015 06:50, Alexandre DERUMIER wrote:
seem to come from this commit:
http://git.qemu.org/?p=qemu.git;a=commit;h=eabc977973103527bbb8fed69c91cfaa6691f8ab
AioContext: fix broken ctx->dispatching optimization
Stefan has a set of patches that fix it.
Paolo
seem to come from this commit:
http://git.qemu.org/?p=qemu.git;a=commit;h=eabc977973103527bbb8fed69c91cfaa6691f8ab
AioContext: fix broken ctx->dispatching optimization
- Original Message -
From: aderumier aderum...@odiso.com
To: qemu-devel qemu-devel@nongnu.org
Sent: Wednesday, July 29, 2015
Hi,
since QEMU 2.4rc2, when I'm using iothreads, the VM hangs: QMP queries are
not working, VNC is not working (the QEMU process doesn't crash).
QEMU 2.4 rc1 works fine (I haven't bisected it yet).
qemu command line:
qemu -chardev socket,id=qmp,path=/var/run/qemu-server/150.qmp,server,nowait
Thinking about this again, I doubt
that lengthening the duration with a hardcoded value benefits everyone; and
before Alexandre's reply on old server/slow disks
With a 1ms sleep, I can reproduce the hang 100% with a fast CPU (Xeon E5 v3
3.1GHz) and the source raw file on NFS.
- Original Message
: Stefan Hajnoczi stefa...@gmail.com, Kevin Wolf kw...@redhat.com,
qemu-devel qemu-devel@nongnu.org, qemu-bl...@nongnu.org
Sent: Friday, July 10, 2015 09:13:33
Subject: Re: [Qemu-devel] [Qemu-block] [PATCH 0/3] mirror: Fix guest
responsiveness during bitmap scan
On Fri, 07/10 08:54, Alexandre
guest
responsiveness during bitmap scan
On Fri, 07/10 08:54, Alexandre DERUMIER wrote:
Thinking about this again, I doubt
that lengthening the duration with a hardcoded value benefits everyone; and
before Alexandre's reply on old server/slow disks
With 1ms sleep, I can reproduce the hang
doesn't get a
fair share of main loop / BQL.
Introduce block_job_relax_cpu which will at least for
BLOCK_JOB_RELAX_CPU_NS. Existing block_job_sleep_ns(job, 0) callers can
be replaced by this later.
Reported-by: Alexandre DERUMIER aderum...@odiso.com
Signed-off-by: Fam Zheng f...@redhat.com
] which is currently in Jeff's tree.
Although [1] fixed the QMP responsiveness, Alexandre DERUMIER reported that
guest responsiveness still suffers when we are busy in the initial dirty bitmap
scanning loop of mirror job. That is because 1) we issue too many lseeks; 2) we
only sleep for 0 ns which
, Alexandre DERUMIER reported that
guest responsiveness still suffers when we are busy in the initial dirty
bitmap
scanning loop of mirror job. That is because 1) we issue too many lseeks; 2)
we
only sleep for 0 ns which turns out quite ineffective in yielding BQL to vcpu
threads. Both
Hi,
I'm currently testing -cpu host,enforce,
and it's failing to start on amd processors (tested with opteron 61XX,opteron
63xx,FX-6300 and FX-9590)
Is it expected ?
warning: host doesn't support requested feature: CPUID.8001H:EDX [bit 0]
warning: host doesn't support requested feature:
that equals to a yield + enter with no
intermission in between (the timer fires immediately in the same
iteration of event loop), which means other code still doesn't get a
fair share of main loop / BQL.
Trim the sleep duration with a minimum value.
Reported-by: Alexandre DERUMIER aderum
Hi,
I'm currently testing this patch,
and it still hangs for me (mirroring a raw file from NFS).
I just wonder why block_job_sleep_ns is set to 0?
+block_job_sleep_ns(&s->common, QEMU_CLOCK_REALTIME, 0);
I have tried increasing it to a bigger value
+
With NFS as the source storage, it's really slow currently (slow lseeks + a
lot of NFS ops).
It's blocked for around 30min for 300GB, with a raw file on a NetApp SAN array
through NFS.
- Original Message -
From: aderumier aderum...@odiso.com
To: Fam Zheng f...@redhat.com
Cc: Kevin Wolf
Hi,
There is no problem; the observation by Andrey was just that the QMP command
takes
a few minutes before returning, because he didn't apply
https://lists.gnu.org/archive/html/qemu-devel/2015-05/msg02511.html
Has this patch already been applied to the block tree?
With NFS as the source storage, it's
, Alexandre DERUMIER wrote:
What is the peak memory usage of jemalloc and tcmalloc?
I'll try to use heap profiling to see.
I don't know if perf can give me the info easily without profiling.
You can try using or modifying this systemtap script:
https://sourceware.org/systemtap/examples
@nongnu.org
Sent: Tuesday, June 23, 2015 09:57:19
Subject: Re: [PATCH] configure: Add support for jemalloc
On 19/06/2015 12:56, Alexandre Derumier wrote:
This adds --enable-jemalloc and --disable-jemalloc to allow linking
to jemalloc memory allocator.
We have already tcmalloc support
4 disks 41076
8 disks 43312
15 disks 37569
tcmalloc : 256M cache
---
1 disk 33914
2 disks 58839
4 disks 148205
8 disks 213298
15 disks 218383
Signed-off-by: Alexandre Derumier aderum...@odiso.com
---
configure | 29 +
1 file changed
disks 67698
4 disks 41076
8 disks 43312
15 disks 37569
tcmalloc : 256M cache
---
1 disk 33914
2 disks 58839
4 disks 148205
8 disks 213298
15 disks 218383
Signed-off-by: Alexandre Derumier aderum...@odiso.com
---
configure | 29
Sorry, forgot to mention - of course I've pulled all of the previous
zeroing-related queue, so I haven't had only the QMP-related fix
running on top of master :)
Hi, I had a discussion about rbd mirroring, detect-zeroes and a sparse target
some months ago with Paolo.
Hi,
if you want to use multiqueue in the guest, you need to enable it on the
virtio-scsi controller:
<controller type='scsi' index='0' model='virtio-scsi' num_queues='8'/>
for example.
- Original Message -
From: Vasiliy Tolstov v.tols...@selfip.ru
To: qemu-devel qemu-devel@nongnu.org,
Hi,
I'm currently playing with drive-mirror (QEMU 2.2),
and I have a QMP hang when drive-mirror is starting.
Just after the QMP drive-mirror exec, the QMP socket and HMP are not responding.
After some time it works again, and I can see the result of query-block-jobs.
The source volume is on nfs (v4),
On 11/05/2015 13:38, Alexandre DERUMIER wrote:
Hi,
I'm currently playing with drive-mirror (QEMU 2.2),
and I have a QMP hang when drive-mirror is starting.
Just after the QMP drive-mirror exec, the QMP socket and HMP are not
responding.
After some time it works again, and I can see the result
Hi,
there have been a lot of patches sent to the mailing list recently for
CPU hotplug/unplug with device_add/del:
https://www.redhat.com/archives/libvir-list/2015-February/msg00084.html
- Original Message -
From: Fahri Cihan Demirci cih...@skyatlas.com
To: qemu-devel qemu-devel@nongnu.org
Sent:
Hi,
Isn't it related to the drive options?
werror=action,rerror=action
Specify which action to take on write and read errors. Valid actions are:
“ignore” (ignore the error and try to continue), “stop” (pause QEMU), “report”
(report the error to the guest), “enospc” (pause QEMU only if the host
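For example, a drive line using these options might look like this (the file
name is hypothetical):

    -drive file=/var/lib/images/disk.raw,if=virtio,werror=stop,rerror=report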
OK,
thanks Paolo!
- Original Message -
From: pbonzini pbonz...@redhat.com
To: aderumier aderum...@odiso.com, qemu-devel qemu-devel@nongnu.org
Sent: Wednesday, April 1, 2015 12:27:27
Subject: Re: virtio-scsi + iothread : segfault on drive_del
On 01/04/2015 05:34, Alexandre DERUMIER wrote
Hi,
I'm currently testing virtio-scsi and iothread,
and I'm seeing a QEMU segfault when I try to remove a SCSI drive
on top of a virtio-scsi controller with iothread enabled.
virtio-blk + iothread drive_del is supported since this patch
...@odiso.com
Cc: qemu-devel qemu-devel@nongnu.org, dietmar diet...@proxmox.com
Sent: Tuesday, March 10, 2015 14:30:20
Subject: Re: [Qemu-devel] balloon stats not working if qemu is started with
-machine option
On Mon, 9 Mar 2015 08:04:54 +0100 (CET)
Alexandre DERUMIER aderum...@odiso.com wrote:
I have
I forgot to say that we don't set the polling interval manually (which
seems to work fine without -machine).
Now, if I set guest-stats-polling-interval with qom-set,
it seems to work fine with the -machine option.
- Original Message -
From: aderumier aderum...@odiso.com
To: qemu-devel
Hi,
I have noticed that balloon stats are not working if a QEMU guest is started
with the -machine option
(-machine pc, or any version). Tested on QEMU 1.7, 2.1, 2.2.
When the guest is starting (balloon driver not yet loaded)
$VAR1 = {
'last-update' => 0,
'stats' => {
Hi,
I think this was already reported some months ago,
and a patch was submitted to the mailing list (but it is waiting for memory
unplug to be merged before being applied):
http://lists.gnu.org/archive/html/qemu-devel/2014-11/msg02362.html
- Original Message -
From: Luiz Capitulino
Hi Stefan,
Only for writes? Or also reads?
I'll try to reproduce on my test cluster.
- Original Message -
From: Stefan Priebe s.pri...@profihost.ag
To: qemu-devel qemu-devel@nongnu.org
Sent: Sunday, February 15, 2015 19:46:12
Subject: [Qemu-devel] slow speed for virtio-scsi since qemu 2.2
Hi,
, 2015-01-26 at 04:19 +0100, Alexandre DERUMIER wrote:
Thanks for your reply.
2 other things:
1)
On CPU unplug, I see that the CPU is correctly removed from my Linux guest,
but not from QEMU.
About this, I can do it successfully on my qemu.
So can you tell us more information about
: [Qemu-devel] [PATCH v2 00/11] cpu: add i386 cpu hot remove support
On Fri, 2015-01-23 at 11:24 +0100, Alexandre DERUMIER wrote:
Hello,
I'm currently testing the new CPU unplug features.
It works fine here with Debian guests and kernel 3.14.
Thanks for your test.
But I have noticed some
chen.fan.f...@cn.fujitsu.com, Igor Mammedov imamm...@redhat.com,
afaerber afaer...@suse.de
Sent: Monday, January 26, 2015 03:01:48
Subject: Re: [Qemu-devel] [PATCH v2 00/11] cpu: add i386 cpu hot remove support
On Fri, 2015-01-23 at 11:24 +0100, Alexandre DERUMIER wrote:
Hello,
I'm currently
-smp 2,sockets=2,cores=1,maxcpus=4
Then I can hotplug|unplug cpuid = 2
Regards,
Alexandre Derumier
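For reference, with that topology the hotplug step itself would be the QMP
cpu-add command of that era (a sketch; the id must be below maxcpus, and
unplug goes through the device_del path added by this series):

    { "execute": "cpu-add", "arguments": { "id": 2 } }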
- Original Message -
From: Zhu Guihua zhugh.f...@cn.fujitsu.com
To: qemu-devel qemu-devel@nongnu.org
Cc: Zhu Guihua zhugh.f...@cn.fujitsu.com, tangc...@cn.fujitsu.com, guz
fnst guz.f
(win2012r2)
On Tue, Jan 20, 2015 at 10:06 PM, Alexandre DERUMIER
aderum...@odiso.com wrote:
Hi,
I have tried with NUMA enabled, and it still doesn't work.
Can you send me your VM's QEMU command line?
Also, with NUMA I have noticed something strange with the info numa command.
starting with -smp
, 2015 at 4:35 PM, Andrey Korolyov and...@xdel.ru wrote:
On Fri, Jan 9, 2015 at 1:26 PM, Alexandre DERUMIER aderum...@odiso.com
wrote:
Hi,
I'm currently testing CPU hotplug with a Windows 2012R2 Standard guest,
and I can't get it to work (works fine with a Linux guest).
host kernel
wrote:
On Fri, Jan 9, 2015 at 1:26 PM, Alexandre DERUMIER aderum...@odiso.com
wrote:
Hi,
I'm currently testing CPU hotplug with a Windows 2012R2 Standard guest,
and I can't get it to work (works fine with a Linux guest).
host kernel : rhel7 3.10 kernel
qemu 2.2
qemu command
)
- Original Message -
From: Igor Mammedov imamm...@redhat.com
To: aderumier aderum...@odiso.com
Cc: qemu-devel qemu-devel@nongnu.org
Sent: Monday, January 19, 2015 17:06:37
Subject: Re: [Qemu-devel] cpu hotplug and windows guest (win2012r2)
On Fri, 9 Jan 2015 11:26:08 +0100 (CET)
Alexandre DERUMIER
Hi,
I'm currently testing CPU hotplug with a Windows 2012R2 Standard guest,
and I can't get it to work (works fine with a Linux guest).
host kernel : rhel7 3.10 kernel
qemu 2.2
qemu command line : -smp cpus=1,sockets=2,cores=1,maxcpus=2
Started with 1 CPU; the topology is 2 sockets with 1 core each.
DERUMIER aderum...@odiso.com, qemu-devel
qemu-devel@nongnu.org
Sent: Friday, November 14, 2014 09:37:02
Subject: Re: iothread object hotplug ?
On 14/11/2014 08:25, Alexandre DERUMIER wrote:
Hi,
I would like to know if it's possible to hot-add/hot-plug an iothread object
on a running guest
Hi,
I would like to know if it's possible to hot-add/hot-plug an iothread object
on a running guest.
(I would like to be able to hotplug new virtio devices on a new iothread at the
same time.)
Regards,
Alexandre
You are missing debug information, unfortunately.
OK thanks, I'll try to add QEMU debug symbols.
(I already have the libc6, librbd and librados debug symbols installed.)
- Original Message -
From: Paolo Bonzini pbonz...@redhat.com
To: Alexandre DERUMIER aderum...@odiso.com, Stefan Hajnoczi
stefa
: Stefan Hajnoczi stefa...@redhat.com
To: Alexandre DERUMIER aderum...@odiso.com
Cc: qemu-devel qemu-devel@nongnu.org, josh durgin
josh.dur...@inktank.com
Sent: Friday, October 24, 2014 11:04:06
Subject: Re: is it possible to use a disk with multiple iothreads ?
On Thu, Oct 23, 2014 at 08:26:17PM +0200
Hi,
I was reading this interesting presentation,
http://vmsplice.net/~stefan/stefanha-kvm-forum-2014.pdf
and I have a specific question.
I'm currently evaluating Ceph/RBD storage performance through QEMU,
and the current bottleneck seems to be the CPU usage of the iothread.
(RBD protocol CPU
:on
#du -sh source.qcow2 : 2M
drive-mirror source.qcow2 -> target.qcow2
# info block
drive-virtio1: /target.qcow2 (qcow2)
#du -sh target.qcow2 : 11G
- Original Message -
From: Paolo Bonzini pbonz...@redhat.com
To: Alexandre DERUMIER aderum...@odiso.com, qemu-devel
qemu-devel@nongnu.org
Cc
Ah, you're right. We need to add an options field, or use a new
blockdev-mirror command.
OK, thanks. I can't help implement this, but I'll be glad to help with testing.
- Original Message -
From: Paolo Bonzini pbonz...@redhat.com
To: Alexandre DERUMIER aderum...@odiso.com
Cc: Ceph Devel ceph
doing fstrim inside the guest
(with virtio-scsi + discard),
and this way I can free space on the RBD storage.
- Original Message -
From: Andrey Korolyov and...@xdel.ru
To: Fam Zheng f...@redhat.com
Cc: Alexandre DERUMIER aderum...@odiso.com, qemu-devel
qemu-devel@nongnu.org, Ceph Devel ceph-de
zero blocks), and drive-mirror takes around 5min.
- Original Message -
From: Fam Zheng f...@redhat.com
To: Alexandre DERUMIER aderum...@odiso.com
Cc: qemu-devel qemu-devel@nongnu.org, Ceph Devel
ceph-de...@vger.kernel.org
Sent: Saturday, October 11, 2014 10:25:35
Subject: Re: [Qemu-devel] qemu drive
;
possibilities include full for the whole disk, top for only the sectors
allocated in the topmost image.
(What is the topmost image?)
- Original Message -
From: Fam Zheng f...@redhat.com
To: Alexandre DERUMIER aderum...@odiso.com
Cc: qemu-devel qemu-devel@nongnu.org, Ceph Devel
ceph-de...@vger.kernel.org
Hi,
It seems that the drive-mirror block job removes the detect-zeroes drive
property on the target drive.
qemu
-device virtio-scsi-pci,id=scsihw0,bus=pci.0,addr=0x5
-drive
file=/source.raw,if=none,id=drive-scsi2,cache=writeback,discard=on,aio=native,detect-zeroes=unmap
# info block
is
sparse after conversion.
Could it be related to the missing bdrv_co_write_zeroes feature in
block/rbd.c?
(It's available in other block drivers (scsi, gluster, raw-aio), and I don't
have this problem with these block drivers.)
Regards,
Alexandre Derumier
Hi,
The x-data-plane syntax is deprecated (it should be removed in QEMU 2.2);
it now uses iothreads:
http://comments.gmane.org/gmane.comp.emulators.qemu/279118
qemu -object iothread,id=iothread0 \
-drive if=none,id=drive0,file=test.qcow2,format=qcow2 \
-device
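A complete invocation of the iothread syntax quoted above might look like this
(the -device line is an assumption completing the truncated example):

    qemu-system-x86_64 \
        -object iothread,id=iothread0 \
        -drive if=none,id=drive0,file=test.qcow2,format=qcow2 \
        -device virtio-blk-pci,drive=drive0,iothread=iothread0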
Hi Paolo,
do you think it'll be possible to use block jobs with dataplane?
Or is it technically impossible?
- Original Message -
From: Paolo Bonzini pbonz...@redhat.com
To: Alexandre DERUMIER aderum...@odiso.com, Scott Sullivan
ssulli...@liquidweb.com
Cc: qemu-devel@nongnu.org
Sent:
OK, great :)
Thanks!
- Original Message -
From: Paolo Bonzini pbonz...@redhat.com
To: Alexandre DERUMIER aderum...@odiso.com
Cc: qemu-devel@nongnu.org, Scott Sullivan ssulli...@liquidweb.com
Sent: Friday, October 3, 2014 17:33:00
Subject: Re: is x-data-plane considered stable ?
On 03/10
behavior if qemu is started with pc-dimm devices)
qemu 2.1
Guest kernel : 3.12.
Does it need a guest balloon module update?
Regards,
Alexandre Derumier
Hi, I can't use virtio-serial with a q35 machine on a PCI bridge (other
devices work fine).
Is it a known bug?
error message:
---
kvm: -device virtio-serial,id=spice,bus=pci.0,addr=0x9: Bus 'pci.0' not found
architecture is:
pcie.0
---pcidmi (i82801b11-bridge)